Result: FAILURE
Tests: 1 failed / 2898 succeeded
Started: 2019-10-09 20:48
Elapsed: 28m26s
Revision:
Builder: gke-prow-ssd-pool-1a225945-7nsk
links: {resultstore: https://source.cloud.google.com/results/invocations/d8565f31-d824-48b8-a965-fc9f9deadf90/targets/test}
pod: 0d655c5f-ead6-11e9-bb6b-9a8df04ecf6a
resultstore: https://source.cloud.google.com/results/invocations/d8565f31-d824-48b8-a965-fc9f9deadf90/targets/test
infra-commit: 0492f1d6e
repo: k8s.io/kubernetes
repo-commit: 9b200ae4c3a4585edd43a2111a45e57024083ed9
repos: {k8s.io/kubernetes: master}

Test Failures


k8s.io/kubernetes/test/integration/volumescheduling TestVolumeProvision 10s

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$
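To reproduce locally, a minimal sketch is below. The checkout steps and the etcd setup are assumptions based on the usual k8s.io/kubernetes integration-test workflow (hack/install-etcd.sh placing etcd under third_party/etcd), not something recorded in this job; the commit hash is the repo-commit from the metadata above.

git clone https://github.com/kubernetes/kubernetes && cd kubernetes
git checkout 9b200ae4c3a4585edd43a2111a45e57024083ed9    # repo-commit from this job
hack/install-etcd.sh                                     # integration tests expect a local etcd at 127.0.0.1:2379
export PATH="$(pwd)/third_party/etcd:${PATH}"
go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$

The output below is the test's own log: it shows the in-process apiserver being brought up (storage backends registered against the local etcd, API groups enabled) before TestVolumeProvision itself runs.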
=== RUN   TestVolumeProvision
W1009 21:15:14.875794  110340 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1009 21:15:14.875823  110340 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I1009 21:15:14.875858  110340 master.go:305] Node port range unspecified. Defaulting to 30000-32767.
I1009 21:15:14.875869  110340 master.go:261] Using reconciler: 
I1009 21:15:14.877883  110340 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.878280  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.878343  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.879285  110340 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1009 21:15:14.879339  110340 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.879466  110340 reflector.go:185] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1009 21:15:14.879608  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.879636  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.880618  110340 store.go:1342] Monitoring events count at <storage-prefix>//events
I1009 21:15:14.880702  110340 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.880804  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.880977  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.881026  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.881031  110340 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1009 21:15:14.881659  110340 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1009 21:15:14.881716  110340 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.881890  110340 reflector.go:185] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1009 21:15:14.881952  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.881977  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.882477  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.882595  110340 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1009 21:15:14.882623  110340 reflector.go:185] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1009 21:15:14.882781  110340 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.882952  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.882975  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.883679  110340 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1009 21:15:14.883876  110340 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.884057  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.884078  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.884146  110340 reflector.go:185] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1009 21:15:14.884301  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.885084  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.885546  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.885875  110340 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1009 21:15:14.885978  110340 reflector.go:185] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1009 21:15:14.886069  110340 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.886183  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.886202  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.887286  110340 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1009 21:15:14.887366  110340 reflector.go:185] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1009 21:15:14.887547  110340 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.887685  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.887705  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.888461  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.888526  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.889392  110340 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1009 21:15:14.889425  110340 reflector.go:185] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1009 21:15:14.890805  110340 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.890885  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.890984  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.891007  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.891919  110340 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1009 21:15:14.892140  110340 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.892213  110340 reflector.go:185] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1009 21:15:14.892412  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.892439  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.892976  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.893283  110340 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1009 21:15:14.893344  110340 reflector.go:185] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1009 21:15:14.893535  110340 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.893738  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.893765  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.894451  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.894933  110340 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1009 21:15:14.894955  110340 reflector.go:185] Listing and watching *core.Node from storage/cacher.go:/minions
I1009 21:15:14.895200  110340 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.895433  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.895462  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.896285  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.897210  110340 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1009 21:15:14.897317  110340 reflector.go:185] Listing and watching *core.Pod from storage/cacher.go:/pods
I1009 21:15:14.897351  110340 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.897454  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.897473  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.898874  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.899566  110340 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1009 21:15:14.899855  110340 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.900035  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.900054  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.900146  110340 reflector.go:185] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1009 21:15:14.901623  110340 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1009 21:15:14.901679  110340 reflector.go:185] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1009 21:15:14.901688  110340 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.902150  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.902172  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.902534  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.903644  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.904284  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.904318  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.905416  110340 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.905599  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.905621  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.906423  110340 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1009 21:15:14.906475  110340 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1009 21:15:14.906509  110340 reflector.go:185] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1009 21:15:14.907088  110340 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.907407  110340 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.907738  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.908595  110340 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.909660  110340 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.910404  110340 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.911084  110340 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.911542  110340 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.911685  110340 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.911927  110340 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.912390  110340 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.912992  110340 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.913227  110340 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.913876  110340 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.914135  110340 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.914546  110340 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.914710  110340 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.915317  110340 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.915571  110340 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.915691  110340 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.915769  110340 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.915914  110340 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.916131  110340 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.916326  110340 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.916941  110340 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.917113  110340 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.917768  110340 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.918392  110340 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.918598  110340 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.918753  110340 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.919424  110340 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.919781  110340 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.920413  110340 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.921088  110340 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.921777  110340 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.922613  110340 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.922927  110340 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.923046  110340 master.go:453] Skipping disabled API group "auditregistration.k8s.io".
I1009 21:15:14.923067  110340 master.go:464] Enabling API group "authentication.k8s.io".
I1009 21:15:14.923206  110340 master.go:464] Enabling API group "authorization.k8s.io".
I1009 21:15:14.923542  110340 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.923738  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.923771  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.925417  110340 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1009 21:15:14.925564  110340 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1009 21:15:14.925645  110340 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.925887  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.926127  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.927357  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.927981  110340 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1009 21:15:14.928072  110340 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1009 21:15:14.930051  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.930504  110340 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.930964  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.931001  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.932113  110340 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1009 21:15:14.932164  110340 master.go:464] Enabling API group "autoscaling".
I1009 21:15:14.932180  110340 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1009 21:15:14.932372  110340 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.932531  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.932558  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.933697  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.934345  110340 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1009 21:15:14.934479  110340 reflector.go:185] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1009 21:15:14.934922  110340 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.935107  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.935134  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.935965  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.936318  110340 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1009 21:15:14.936350  110340 master.go:464] Enabling API group "batch".
I1009 21:15:14.936453  110340 reflector.go:185] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1009 21:15:14.936562  110340 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.936718  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.936744  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.937541  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.937579  110340 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1009 21:15:14.937609  110340 master.go:464] Enabling API group "certificates.k8s.io".
I1009 21:15:14.937670  110340 reflector.go:185] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1009 21:15:14.938174  110340 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.938327  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.938347  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.939204  110340 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1009 21:15:14.939239  110340 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1009 21:15:14.939269  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.939410  110340 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.939551  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.939575  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.940453  110340 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1009 21:15:14.940476  110340 master.go:464] Enabling API group "coordination.k8s.io".
I1009 21:15:14.940492  110340 master.go:453] Skipping disabled API group "discovery.k8s.io".
I1009 21:15:14.940610  110340 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1009 21:15:14.940872  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.941042  110340 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.941258  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.941312  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.941819  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.942434  110340 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1009 21:15:14.942467  110340 master.go:464] Enabling API group "extensions".
I1009 21:15:14.942551  110340 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1009 21:15:14.942641  110340 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.942761  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.942782  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.943671  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.944588  110340 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1009 21:15:14.944653  110340 reflector.go:185] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1009 21:15:14.944985  110340 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.945475  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.945513  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.945822  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.946865  110340 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1009 21:15:14.946928  110340 master.go:464] Enabling API group "networking.k8s.io".
I1009 21:15:14.946941  110340 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1009 21:15:14.946991  110340 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.947214  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.947236  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.947820  110340 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1009 21:15:14.947929  110340 master.go:464] Enabling API group "node.k8s.io".
I1009 21:15:14.948075  110340 reflector.go:185] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1009 21:15:14.948132  110340 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.948359  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.948399  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.948943  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.949852  110340 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1009 21:15:14.949896  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.949931  110340 reflector.go:185] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1009 21:15:14.950069  110340 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.950483  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.950514  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.950687  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.951659  110340 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1009 21:15:14.951685  110340 master.go:464] Enabling API group "policy".
I1009 21:15:14.951781  110340 reflector.go:185] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1009 21:15:14.951769  110340 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.952202  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.952265  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.953009  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.953163  110340 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1009 21:15:14.953273  110340 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1009 21:15:14.953503  110340 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.953870  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.953910  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.954622  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.955125  110340 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1009 21:15:14.955299  110340 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.955329  110340 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1009 21:15:14.955477  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.955504  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.956513  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.957083  110340 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1009 21:15:14.957223  110340 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1009 21:15:14.957328  110340 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.957476  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.957501  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.958814  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.959782  110340 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1009 21:15:14.959795  110340 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1009 21:15:14.959898  110340 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.960443  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.960479  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.961607  110340 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1009 21:15:14.961670  110340 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1009 21:15:14.962285  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.962375  110340 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.962647  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.962667  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.962692  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.963702  110340 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1009 21:15:14.963783  110340 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1009 21:15:14.964498  110340 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.964612  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.964679  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.964700  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.965567  110340 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1009 21:15:14.965660  110340 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1009 21:15:14.966223  110340 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.966551  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.966573  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.967053  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.967519  110340 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1009 21:15:14.967577  110340 master.go:464] Enabling API group "rbac.authorization.k8s.io".
I1009 21:15:14.967598  110340 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1009 21:15:14.968775  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
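
Every resource in this log follows the same short pattern: storage_factory.go builds an etcd3-backed storagebackend.Config (the struct dump shows its fields verbatim), client.go and endpoint.go dial the test etcd, store.go starts "Monitoring ... count", and a reflector list+watch primes the watch cache at revision 57853. Below is a minimal sketch of that config, assuming the k8s.io/apiserver storagebackend package at this commit exposes exactly the fields the log prints; nil-valued fields (Codec, EncodeVersioner, Transformer, EgressLookup) are simply left at their zero values.

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Mirrors the struct dump printed by storage_factory.go:285 in the log above.
	cfg := storagebackend.Config{
		Type:   "", // empty string selects the default backend (etcd3)
		Prefix: "6f243518-070e-43d7-8ff6-ea97e6b7a363",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
			// KeyFile/CertFile/CAFile stay empty: the test etcd speaks plain HTTP.
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // logged as 300000000000 (nanoseconds)
		CountMetricPollPeriod: time.Minute,     // logged as 60000000000 (nanoseconds)
	}
	fmt.Printf("%#v\n", cfg)
}
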
I1009 21:15:14.970111  110340 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.970410  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.970462  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.971562  110340 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1009 21:15:14.971820  110340 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1009 21:15:14.971801  110340 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.971995  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.972019  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.973054  110340 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1009 21:15:14.973073  110340 master.go:464] Enabling API group "scheduling.k8s.io".
I1009 21:15:14.973110  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.973169  110340 master.go:453] Skipping disabled API group "settings.k8s.io".
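
The repeated 'parsed scheme: "endpoint"' / "ccResolverWrapper: sending new addresses" pairs come from the etcd clientv3 gRPC resolver: each storage decorator above gets its own etcd client, and each client hands the single test endpoint to gRPC before its first List. A rough stand-alone equivalent using go.etcd.io/etcd/clientv3 directly is sketched below; this is not the apiserver's wiring, and the key layout under the prefix and the timeouts are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Dial the endpoint the log shows gRPC resolving: {http://127.0.0.1:2379 0  <nil>}.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second, // assumed; the apiserver uses its own dial options
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Count keys under the test's random prefix, roughly what store.go reports as
	// "Monitoring <resource> count at <storage-prefix>//<resource>".
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	resp, err := cli.Get(ctx, "/6f243518-070e-43d7-8ff6-ea97e6b7a363/", clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		panic(err)
	}
	fmt.Println("keys under prefix:", resp.Count)
}
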
I1009 21:15:14.973340  110340 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.973501  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.973523  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.973584  110340 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1009 21:15:14.974423  110340 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1009 21:15:14.974593  110340 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.974721  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.974742  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.974768  110340 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1009 21:15:14.975370  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.975385  110340 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1009 21:15:14.975439  110340 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.975550  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.975568  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.975573  110340 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1009 21:15:14.976730  110340 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1009 21:15:14.976790  110340 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.976807  110340 reflector.go:185] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1009 21:15:14.976862  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.977445  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.977474  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.977582  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.978671  110340 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1009 21:15:14.978953  110340 reflector.go:185] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1009 21:15:14.979196  110340 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.979994  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.980027  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.980171  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.980572  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.980998  110340 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1009 21:15:14.981052  110340 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1009 21:15:14.981233  110340 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.981460  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.981488  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.982118  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.982358  110340 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1009 21:15:14.982395  110340 master.go:464] Enabling API group "storage.k8s.io".
I1009 21:15:14.982447  110340 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
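
Each "Listing and watching ... from storage/cacher.go" line followed by "Replace watchCache (rev: 57853)" is a cacher priming its watch cache: a client-go Reflector lists the resource at the current etcd revision (57853 throughout this startup) and then watches from that revision. Inside the apiserver the ListerWatcher wraps the etcd storage itself; the sketch below shows the same Reflector machinery in its ordinary client-go form against an apiserver URL, purely to illustrate the pattern. The host, the port, and the choice of StorageClass are assumptions.

package main

import (
	"time"

	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Assumed: an insecure local apiserver reachable on this address.
	client, err := kubernetes.NewForConfig(&rest.Config{Host: "http://127.0.0.1:8080"})
	if err != nil {
		panic(err)
	}

	// ListWatch + Reflector: the machinery behind
	// "Listing and watching *storage.StorageClass from storage/cacher.go".
	lw := cache.NewListWatchFromClient(client.StorageV1().RESTClient(), "storageclasses", "", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &storagev1.StorageClass{}, store, 0)

	stop := make(chan struct{})
	defer close(stop)
	go r.Run(stop) // initial List replaces the store (cf. "Replace watchCache"), then Watch streams updates
	time.Sleep(2 * time.Second)
}
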
I1009 21:15:14.982618  110340 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.982786  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.982810  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.983421  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.983701  110340 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1009 21:15:14.983807  110340 reflector.go:185] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1009 21:15:14.984006  110340 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.984177  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.984201  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.985025  110340 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1009 21:15:14.985157  110340 reflector.go:185] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1009 21:15:14.985375  110340 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.985739  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.985773  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.985880  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.986120  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.987246  110340 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1009 21:15:14.987340  110340 reflector.go:185] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1009 21:15:14.987455  110340 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.987650  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.987673  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.988121  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.988559  110340 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1009 21:15:14.988742  110340 reflector.go:185] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1009 21:15:14.988795  110340 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.989289  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.989327  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.989467  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.990223  110340 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1009 21:15:14.990280  110340 reflector.go:185] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1009 21:15:14.990318  110340 master.go:464] Enabling API group "apps".
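
The two large integers that end every Config dump above are plain nanosecond durations, easy to sanity-check with the standard library:

package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println(time.Duration(300000000000)) // 5m0s -> CompactionInterval
	fmt.Println(time.Duration(60000000000))  // 1m0s -> CountMetricPollPeriod
}
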
I1009 21:15:14.990391  110340 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.990551  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.990603  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.991941  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.992824  110340 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1009 21:15:14.992927  110340 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.992948  110340 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1009 21:15:14.993098  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.993112  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.994172  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.994645  110340 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1009 21:15:14.994692  110340 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1009 21:15:14.994719  110340 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.994923  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.994951  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.995736  110340 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1009 21:15:14.995794  110340 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.995928  110340 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1009 21:15:14.995972  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.995978  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.996050  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:14.997091  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.997446  110340 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1009 21:15:14.997514  110340 master.go:464] Enabling API group "admissionregistration.k8s.io".
I1009 21:15:14.997560  110340 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1009 21:15:14.998241  110340 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:14.998979  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:14.999798  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:14.999825  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1009 21:15:15.000662  110340 store.go:1342] Monitoring events count at <storage-prefix>//events
I1009 21:15:15.000683  110340 master.go:464] Enabling API group "events.k8s.io".
I1009 21:15:15.000690  110340 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1009 21:15:15.001065  110340 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.001316  110340 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.001557  110340 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.001765  110340 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.002031  110340 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.002235  110340 watch_cache.go:451] Replace watchCache (rev: 57853) 
I1009 21:15:15.002329  110340 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.002767  110340 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.003164  110340 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.003427  110340 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.003710  110340 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.004826  110340 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.005247  110340 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.006626  110340 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.007045  110340 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.008380  110340 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.008968  110340 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.009883  110340 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.010375  110340 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.011416  110340 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.011743  110340 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1009 21:15:15.011792  110340 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1009 21:15:15.012781  110340 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.012981  110340 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.013295  110340 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.014308  110340 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.015992  110340 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.017315  110340 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.017633  110340 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.019096  110340 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.020202  110340 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.020575  110340 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.021685  110340 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1009 21:15:15.021874  110340 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1009 21:15:15.023264  110340 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.023651  110340 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.024675  110340 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.025499  110340 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.026378  110340 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.027534  110340 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.028595  110340 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.029495  110340 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.030081  110340 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.031173  110340 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.032607  110340 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1009 21:15:15.032714  110340 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1009 21:15:15.033343  110340 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.034085  110340 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1009 21:15:15.034231  110340 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1009 21:15:15.034959  110340 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.035904  110340 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.036259  110340 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.036773  110340 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.037298  110340 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.037974  110340 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.038506  110340 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1009 21:15:15.038562  110340 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1009 21:15:15.039616  110340 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.040755  110340 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.041106  110340 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.041998  110340 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.042532  110340 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.043048  110340 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.043569  110340 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.043905  110340 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.044286  110340 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.045307  110340 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.045635  110340 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.045909  110340 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1009 21:15:15.046016  110340 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1009 21:15:15.046120  110340 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1009 21:15:15.046991  110340 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.047588  110340 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.048705  110340 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.049536  110340 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1009 21:15:15.050394  110340 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6f243518-070e-43d7-8ff6-ea97e6b7a363", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
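(Annotation, not part of the log.) Each storage_factory.go line above wires one API resource to the same etcd-backed storage configuration; the struct dump repeated in those lines is a storagebackend.Config. Below is a minimal sketch of building such a config with the values shown in the log; the import path and the helper main() are assumptions, the field names and values mirror the Config printed above.

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend" // assumed import path for the Config type in the log
)

func main() {
	cfg := storagebackend.Config{
		Prefix: "6f243518-070e-43d7-8ff6-ea97e6b7a363", // per-test etcd prefix, as logged
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"}, // the test etcd endpoint from the log
		},
		Paging:                true,
		CompactionInterval:    300 * time.Second, // 300000000000ns in the log
		CountMetricPollPeriod: 60 * time.Second,  // 60000000000ns in the log
	}
	fmt.Printf("%+v\n", cfg)
}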
I1009 21:15:15.055342  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.055378  110340 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1009 21:15:15.055389  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.055400  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.055409  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.055416  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.055451  110340 httplog.go:90] GET /healthz: (376.721µs) 0 [Go-http-client/1.1 127.0.0.1:43352]
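(Annotation, not part of the log.) The [+]/[-] lines above are the apiserver's verbose /healthz report while etcd and the post-start hooks are still coming up, and the GET /healthz entries show it being polled roughly every 100ms. A minimal sketch of that polling loop follows; the base URL is an assumption (the test apiserver's listen port is not shown in this log) and no auth is used, matching an insecure local test setup.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	const healthzURL = "http://127.0.0.1:8080/healthz" // assumed address, not taken from the log
	for i := 0; i < 10; i++ {
		resp, err := http.Get(healthzURL)
		if err != nil {
			fmt.Println("healthz request failed:", err)
		} else {
			body, _ := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			// While any check fails, the body carries the same [+]/[-] report and
			// "healthz check failed" seen in the log; 200 with "ok" means all checks pass.
			fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond) // the log shows ~100ms polling intervals
	}
}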
I1009 21:15:15.057948  110340 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.285435ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:15.062609  110340 httplog.go:90] GET /api/v1/services: (2.17357ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:15.067312  110340 httplog.go:90] GET /api/v1/services: (1.217548ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:15.070911  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.070952  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.070965  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.070975  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.070983  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.071028  110340 httplog.go:90] GET /healthz: (369.767µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:15.073623  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.96221ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43352]
I1009 21:15:15.074764  110340 httplog.go:90] GET /api/v1/services: (2.237949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:15.074939  110340 httplog.go:90] GET /api/v1/services: (1.888676ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.077210  110340 httplog.go:90] POST /api/v1/namespaces: (3.007157ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43352]
I1009 21:15:15.079107  110340 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.228642ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.081589  110340 httplog.go:90] POST /api/v1/namespaces: (2.144663ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.083078  110340 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.155311ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.084819  110340 httplog.go:90] POST /api/v1/namespaces: (1.447802ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.156523  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.156639  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.156649  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.156656  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.156662  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.156703  110340 httplog.go:90] GET /healthz: (480.893µs) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:15.171920  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.171958  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.171989  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.171999  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.172061  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.172111  110340 httplog.go:90] GET /healthz: (347.91µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.256417  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.256455  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.256467  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.256477  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.256485  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.256516  110340 httplog.go:90] GET /healthz: (245.648µs) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:15.271982  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.272022  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.272031  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.272038  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.272044  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.272073  110340 httplog.go:90] GET /healthz: (312.131µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.356286  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.356329  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.356341  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.356351  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.356359  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.356405  110340 httplog.go:90] GET /healthz: (287.606µs) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:15.371915  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.371956  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.371968  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.371976  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.371984  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.372038  110340 httplog.go:90] GET /healthz: (307.291µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.456245  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.456278  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.456287  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.456293  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.456298  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.456340  110340 httplog.go:90] GET /healthz: (263.692µs) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:15.471962  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.471997  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.472006  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.472013  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.472029  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.472068  110340 httplog.go:90] GET /healthz: (366.498µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.556239  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.556282  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.556295  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.556305  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.556313  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.556356  110340 httplog.go:90] GET /healthz: (311.468µs) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:15.571912  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.571955  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.571968  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.571977  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.571987  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.572033  110340 httplog.go:90] GET /healthz: (350.856µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.656281  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.656318  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.656334  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.656340  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.656347  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.656380  110340 httplog.go:90] GET /healthz: (251.911µs) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:15.671941  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.671978  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.671987  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.671994  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.671999  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.672026  110340 httplog.go:90] GET /healthz: (266.278µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.756433  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.756473  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.756483  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.756490  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.756496  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.756552  110340 httplog.go:90] GET /healthz: (301.367µs) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:15.771957  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.771994  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.772017  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.772040  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.772046  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.772103  110340 httplog.go:90] GET /healthz: (324.392µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.856282  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.856330  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.856339  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.856345  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.856352  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.856387  110340 httplog.go:90] GET /healthz: (278.05µs) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:15.872111  110340 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1009 21:15:15.872165  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.872192  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.872202  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.872211  110340 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.872265  110340 httplog.go:90] GET /healthz: (410.921µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:15.875618  110340 client.go:361] parsed scheme: "endpoint"
I1009 21:15:15.875719  110340 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
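(Annotation, not part of the log.) The two resolver lines above are the etcd client handing the endpoint to grpc; once that connection is established, the etcd healthz check in the blocks below flips from failing to [+]etcd ok. A minimal sketch of dialing the same endpoint, assuming the go.etcd.io/etcd/clientv3 package of this era:

package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3" // assumed import path for this vintage of the etcd client
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"}, // the endpoint from the log
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer cli.Close()

	// A trivial read to prove the connection is actually established.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "health-probe"); err != nil {
		fmt.Println("get failed:", err)
		return
	}
	fmt.Println("etcd client connection established")
}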
I1009 21:15:15.957438  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.957474  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.957484  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.957493  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.957561  110340 httplog.go:90] GET /healthz: (1.448264ms) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:15.972715  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:15.972749  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:15.972758  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:15.972766  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:15.972810  110340 httplog.go:90] GET /healthz: (1.086328ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.056043  110340 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.311394ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.056584  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.276076ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43388]
I1009 21:15:16.057915  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (974.595µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43388]
I1009 21:15:16.058342  110340 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.718326ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.058404  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.058434  110340 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1009 21:15:16.058444  110340 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1009 21:15:16.058452  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1009 21:15:16.058481  110340 httplog.go:90] GET /healthz: (972.659µs) 0 [Go-http-client/1.1 127.0.0.1:43390]
I1009 21:15:16.058507  110340 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1009 21:15:16.059141  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.314879ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.059650  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.368748ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43388]
I1009 21:15:16.059668  110340 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.062331ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.060697  110340 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (943.749µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.060761  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (673.912µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43388]
I1009 21:15:16.062044  110340 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.901389ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.062049  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (829.542µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43390]
I1009 21:15:16.062250  110340 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1009 21:15:16.062296  110340 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
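(Annotation, not part of the log.) The priority-class bootstrap above, like the RBAC clusterrole bootstrap that follows, is a check-then-create flow: a GET that returns 404, then a POST that returns 201. A minimal sketch of that flow against the raw API paths from the log; the base URL is an assumption, while the resource path, names and values are taken from the log lines.

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func ensurePriorityClass(baseURL, name string, value int64) error {
	getURL := baseURL + "/apis/scheduling.k8s.io/v1beta1/priorityclasses/" + name
	resp, err := http.Get(getURL)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNotFound {
		return nil // already exists (or some other non-404 answer), nothing to create
	}
	body := fmt.Sprintf(`{"apiVersion":"scheduling.k8s.io/v1beta1","kind":"PriorityClass","metadata":{"name":%q},"value":%d}`, name, value)
	post, err := http.Post(baseURL+"/apis/scheduling.k8s.io/v1beta1/priorityclasses", "application/json", bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	post.Body.Close()
	fmt.Printf("created PriorityClass %s with value %d (HTTP %d)\n", name, value, post.StatusCode)
	return nil
}

func main() {
	const base = "http://127.0.0.1:8080" // assumed address, not taken from the log
	for name, value := range map[string]int64{
		"system-node-critical":    2000001000, // values as logged by storage_scheduling.go
		"system-cluster-critical": 2000000000,
	} {
		if err := ensurePriorityClass(base, name, value); err != nil {
			fmt.Println("ensure failed:", err)
		}
	}
}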
I1009 21:15:16.063079  110340 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.816191ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.063498  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.061425ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43390]
I1009 21:15:16.065101  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.125495ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.066103  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (740.153µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.067060  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (659.231µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.068187  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (842.42µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.070188  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.633348ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.070425  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1009 21:15:16.071526  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (936.51µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.073655  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.073689  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.073720  110340 httplog.go:90] GET /healthz: (1.700797ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.073665  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.766992ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.073993  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1009 21:15:16.075296  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.066022ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.078193  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.490139ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.078462  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1009 21:15:16.079898  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.111716ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.082431  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.157595ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.082749  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1009 21:15:16.084146  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (985.754µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.087074  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.156924ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.087362  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1009 21:15:16.089582  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.672991ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.092921  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.593989ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.093338  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1009 21:15:16.094995  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.468562ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.097584  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.126995ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.098074  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1009 21:15:16.099369  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.056143ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.102047  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.096825ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.102369  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1009 21:15:16.103784  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.070576ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.106315  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.897113ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.106586  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1009 21:15:16.108226  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.384243ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.111733  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.400553ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.112075  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1009 21:15:16.113646  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.374727ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.117575  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.277814ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.118396  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1009 21:15:16.119974  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.301239ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.123545  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.82775ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.124179  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1009 21:15:16.125773  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.241245ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.128603  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.122637ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.129146  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1009 21:15:16.130624  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.219076ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.133450  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.228423ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.133882  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1009 21:15:16.135426  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.247969ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.138019  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.909745ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.138268  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1009 21:15:16.139741  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.212202ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.142062  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.818072ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.142390  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1009 21:15:16.144026  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.400584ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.146474  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.869269ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.146695  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1009 21:15:16.148384  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.242673ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.151321  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.42027ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.151590  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1009 21:15:16.153377  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.564808ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.156046  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.251275ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.156236  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1009 21:15:16.157007  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.157066  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.157102  110340 httplog.go:90] GET /healthz: (1.068011ms) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:16.157603  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.132389ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.160124  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.801154ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.160392  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1009 21:15:16.162024  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.380851ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.165051  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.394985ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.165345  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1009 21:15:16.166910  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.065888ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.169347  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.914514ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.169700  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1009 21:15:16.171388  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.248565ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.172713  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.172767  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.172800  110340 httplog.go:90] GET /healthz: (1.156516ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.173822  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.906535ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.174213  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1009 21:15:16.175406  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (965.766µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.178106  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.208943ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.178368  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1009 21:15:16.179670  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.11922ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.182140  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.935046ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.182453  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1009 21:15:16.183889  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (999.898µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.186645  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.176419ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.187020  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1009 21:15:16.188713  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.428496ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.191214  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.902438ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.191456  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1009 21:15:16.192857  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.071107ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.195343  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.004151ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.195777  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1009 21:15:16.197431  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.191248ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.200644  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.507637ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.200903  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1009 21:15:16.202379  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.203314ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.204717  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.872031ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.204993  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1009 21:15:16.206624  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.297212ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.209515  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.247423ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.209858  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1009 21:15:16.211551  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.342913ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.214312  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.10566ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.214538  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1009 21:15:16.216014  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.066547ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.218417  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.925971ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.218770  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1009 21:15:16.220347  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.27722ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.223188  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.23316ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.223466  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1009 21:15:16.225463  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.680644ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.227788  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.837765ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.228080  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1009 21:15:16.229572  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.237981ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.232971  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.949368ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.233240  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1009 21:15:16.234716  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.199222ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.237712  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.35638ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.238039  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1009 21:15:16.239591  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.232241ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.242234  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.034664ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.242482  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1009 21:15:16.243753  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.029557ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.246043  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.730936ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.246396  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1009 21:15:16.247787  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.047417ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.250663  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.254704ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.251076  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1009 21:15:16.252445  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.081719ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.254650  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.632784ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.255262  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1009 21:15:16.256653  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.058846ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.257251  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.257280  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.257311  110340 httplog.go:90] GET /healthz: (1.337239ms) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:16.258945  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.664445ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.259116  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1009 21:15:16.260151  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (832.723µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.262411  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.588532ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.263396  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1009 21:15:16.264653  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.026785ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.266481  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.401585ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.266908  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1009 21:15:16.268276  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.055914ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.270467  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.557041ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.270794  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1009 21:15:16.272197  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (961.379µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.272465  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.272768  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.273041  110340 httplog.go:90] GET /healthz: (1.411794ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.275630  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.856966ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.276142  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1009 21:15:16.277530  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.024888ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.280137  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.99907ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.280481  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1009 21:15:16.282010  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.187579ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.285732  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.026142ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.286069  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1009 21:15:16.287644  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.283565ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.291104  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.731862ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.291556  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1009 21:15:16.293412  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.409984ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.296247  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.074259ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.296468  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1009 21:15:16.297887  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.19469ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.300437  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.902056ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.300913  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1009 21:15:16.302196  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.057692ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.317476  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.421796ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.317963  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1009 21:15:16.336865  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.763182ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.357237  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.357525  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.357805  110340 httplog.go:90] GET /healthz: (1.698585ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:16.358513  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.405415ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.358766  110340 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1009 21:15:16.373290  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.373591  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.373950  110340 httplog.go:90] GET /healthz: (2.177859ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.376583  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.528772ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.398056  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.777416ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.398495  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1009 21:15:16.416987  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.909815ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.437418  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.395458ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.437684  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1009 21:15:16.456609  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.557189ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.456987  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.457043  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.457090  110340 httplog.go:90] GET /healthz: (1.022679ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:16.473415  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.473477  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.473541  110340 httplog.go:90] GET /healthz: (1.544093ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.477384  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.435709ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.477763  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1009 21:15:16.497874  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (2.698397ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.517251  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.102765ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.517645  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1009 21:15:16.536684  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.558667ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.557078  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.557112  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.557149  110340 httplog.go:90] GET /healthz: (1.109471ms) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:16.559172  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.151794ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.559478  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1009 21:15:16.573448  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.573482  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.573518  110340 httplog.go:90] GET /healthz: (1.678189ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.576710  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.111132ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.598093  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.87997ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.598425  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1009 21:15:16.616778  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.751239ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.638029  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.910706ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.638558  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1009 21:15:16.656721  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.737576ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.657027  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.657096  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.657163  110340 httplog.go:90] GET /healthz: (935.316µs) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:16.673038  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.673078  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.673118  110340 httplog.go:90] GET /healthz: (1.281191ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.677062  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.020056ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.677374  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1009 21:15:16.697091  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.906075ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.717714  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.582016ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.718023  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1009 21:15:16.736779  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.69397ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.757297  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.757343  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.757393  110340 httplog.go:90] GET /healthz: (1.338708ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:16.758579  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.440897ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.758866  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1009 21:15:16.772963  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.773004  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.773072  110340 httplog.go:90] GET /healthz: (1.266425ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.776362  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.43698ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.798211  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.016109ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.798543  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1009 21:15:16.817190  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (2.090079ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.838152  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.938566ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.838484  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1009 21:15:16.856731  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.662943ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.857033  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.857063  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.857237  110340 httplog.go:90] GET /healthz: (1.08207ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:16.874517  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.874556  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.874617  110340 httplog.go:90] GET /healthz: (2.861055ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.877660  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.652453ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.878010  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1009 21:15:16.896568  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.381099ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.917624  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.579852ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.918088  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1009 21:15:16.936785  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.662514ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.957595  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.536208ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:16.957671  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.957690  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.957721  110340 httplog.go:90] GET /healthz: (1.532558ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:16.957934  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1009 21:15:16.973130  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:16.973164  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:16.973210  110340 httplog.go:90] GET /healthz: (1.433742ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.976354  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.46953ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.997121  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.147528ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:16.997427  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1009 21:15:17.016813  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.712296ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.037366  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.272757ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.037601  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1009 21:15:17.056708  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.540183ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.056759  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.056787  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.056822  110340 httplog.go:90] GET /healthz: (781.941µs) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:17.073582  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.073617  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.073688  110340 httplog.go:90] GET /healthz: (1.869717ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.078277  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.249135ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.078618  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1009 21:15:17.096705  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.438728ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.117785  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.631192ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.118249  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1009 21:15:17.136737  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.734188ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.158574  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.158615  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.158655  110340 httplog.go:90] GET /healthz: (2.559486ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:17.158750  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.501217ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.159052  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1009 21:15:17.173493  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.173528  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.173571  110340 httplog.go:90] GET /healthz: (1.641812ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.176225  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.356654ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.197945  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.640742ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.198266  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1009 21:15:17.216516  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.541346ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.238498  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.143553ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.239045  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1009 21:15:17.256690  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.678763ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.257220  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.257253  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.257292  110340 httplog.go:90] GET /healthz: (1.183188ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:17.273145  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.273182  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.273232  110340 httplog.go:90] GET /healthz: (1.530194ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.276926  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.014328ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.277306  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1009 21:15:17.297079  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.986713ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.318815  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.760288ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.319116  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1009 21:15:17.336866  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.734196ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.358167  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.358199  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.358237  110340 httplog.go:90] GET /healthz: (1.812478ms) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:17.358643  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.626677ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.358804  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1009 21:15:17.373043  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.373081  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.373128  110340 httplog.go:90] GET /healthz: (1.298408ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.376625  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.563887ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.397258  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.203119ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.397709  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1009 21:15:17.416703  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.688257ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.437449  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.35777ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.437778  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1009 21:15:17.457020  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.457053  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.457093  110340 httplog.go:90] GET /healthz: (1.009188ms) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:17.457528  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.33053ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.473317  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.473561  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.473773  110340 httplog.go:90] GET /healthz: (1.953808ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.477576  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.60947ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.477747  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1009 21:15:17.496771  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.754146ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.518276  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.149424ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.518726  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1009 21:15:17.536478  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.434101ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.557545  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.557718  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.557881  110340 httplog.go:90] GET /healthz: (1.555819ms) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:17.557927  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.599538ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.558291  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1009 21:15:17.573027  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.573063  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.573120  110340 httplog.go:90] GET /healthz: (1.41782ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.576425  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.429323ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.597777  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.516266ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.598545  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1009 21:15:17.616729  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.461579ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.637759  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.790496ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.638062  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1009 21:15:17.656811  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.673472ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:17.657129  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.657156  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.657202  110340 httplog.go:90] GET /healthz: (1.124781ms) 0 [Go-http-client/1.1 127.0.0.1:43356]
I1009 21:15:17.672910  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.672942  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.672988  110340 httplog.go:90] GET /healthz: (1.301181ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.676972  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.134279ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.677384  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1009 21:15:17.696674  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.626778ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.717309  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.32446ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.717565  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1009 21:15:17.736520  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.534263ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.757577  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.432138ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.757920  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.757941  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.757976  110340 httplog.go:90] GET /healthz: (1.913086ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:17.758287  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1009 21:15:17.772930  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.772983  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.773028  110340 httplog.go:90] GET /healthz: (1.288161ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.776189  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.359041ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.797059  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.116485ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.797345  110340 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1009 21:15:17.816457  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.357198ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.818306  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.242435ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.839465  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.109321ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.839898  110340 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1009 21:15:17.856500  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.548536ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.857284  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.857315  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.857345  110340 httplog.go:90] GET /healthz: (1.217755ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:17.858986  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.311453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.873080  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.873146  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.873191  110340 httplog.go:90] GET /healthz: (1.457905ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.877450  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.573357ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.877803  110340 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1009 21:15:17.896546  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.595459ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.898600  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.370716ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.917986  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.831308ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.918369  110340 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1009 21:15:17.936710  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.695853ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.939291  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.738368ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.958292  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.958328  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.958394  110340 httplog.go:90] GET /healthz: (2.340704ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:17.959901  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.65848ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.960212  110340 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1009 21:15:17.973429  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:17.973486  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:17.973536  110340 httplog.go:90] GET /healthz: (1.836643ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.976892  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.574502ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.979406  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.046052ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.998284  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.131288ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:17.998660  110340 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1009 21:15:18.017375  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.359968ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.019237  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.333221ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.037343  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.288475ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.037587  110340 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1009 21:15:18.056751  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:18.056781  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:18.056844  110340 httplog.go:90] GET /healthz: (794.722µs) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:18.057286  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.246568ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.059431  110340 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.648101ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.073267  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:18.073299  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:18.073344  110340 httplog.go:90] GET /healthz: (1.512197ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.077234  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.255648ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.077478  110340 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1009 21:15:18.096859  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.714376ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.099187  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.720118ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.121060  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (6.049111ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.122458  110340 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1009 21:15:18.136441  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.444959ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.138719  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.600764ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.157367  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:18.157535  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:18.157662  110340 httplog.go:90] GET /healthz: (1.62823ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:18.158556  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.538193ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.159089  110340 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1009 21:15:18.172783  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:18.172820  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:18.172875  110340 httplog.go:90] GET /healthz: (1.157232ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.176183  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.188194ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.178140  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.546061ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.197737  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.703761ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.198141  110340 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1009 21:15:18.216830  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.075546ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.219144  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.510319ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.236944  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.959411ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.237348  110340 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1009 21:15:18.257059  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:18.257091  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:18.257153  110340 httplog.go:90] GET /healthz: (1.050042ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:18.259261  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (3.631568ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.267045  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.8867ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.272707  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:18.272753  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:18.272786  110340 httplog.go:90] GET /healthz: (1.107869ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.277910  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.784252ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.278265  110340 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1009 21:15:18.296373  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.373052ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.298688  110340 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.647134ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.319163  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.704664ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.319446  110340 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1009 21:15:18.337309  110340 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.245913ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.339337  110340 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.322691ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.356960  110340 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1009 21:15:18.357255  110340 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1009 21:15:18.357416  110340 httplog.go:90] GET /healthz: (1.322477ms) 0 [Go-http-client/1.1 127.0.0.1:43354]
I1009 21:15:18.357432  110340 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.428014ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.358025  110340 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1009 21:15:18.373152  110340 httplog.go:90] GET /healthz: (1.410438ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.375098  110340 httplog.go:90] GET /api/v1/namespaces/default: (1.512375ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.378559  110340 httplog.go:90] POST /api/v1/namespaces: (2.949923ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.381553  110340 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.025692ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.387218  110340 httplog.go:90] POST /api/v1/namespaces/default/services: (4.53538ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.389017  110340 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (928.294µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.391135  110340 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.696327ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.457470  110340 httplog.go:90] GET /healthz: (1.373382ms) 200 [Go-http-client/1.1 127.0.0.1:43356]
W1009 21:15:18.458187  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.458213  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.458224  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.458233  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.458242  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.458249  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.458258  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.458270  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.458278  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.458290  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.458298  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1009 21:15:18.458342  110340 factory.go:289] Creating scheduler from algorithm provider 'DefaultProvider'
I1009 21:15:18.458356  110340 factory.go:377] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1009 21:15:18.458864  110340 reflector.go:150] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.458911  110340 reflector.go:150] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.458929  110340 reflector.go:185] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459012  110340 reflector.go:150] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459021  110340 reflector.go:185] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459201  110340 reflector.go:150] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459288  110340 reflector.go:150] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459313  110340 reflector.go:185] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459390  110340 reflector.go:150] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459406  110340 reflector.go:185] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.458912  110340 reflector.go:185] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459258  110340 reflector.go:150] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459597  110340 reflector.go:185] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459299  110340 reflector.go:185] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459769  110340 reflector.go:150] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459787  110340 reflector.go:185] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459876  110340 reflector.go:150] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.459892  110340 reflector.go:185] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.460050  110340 reflector.go:150] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.460064  110340 reflector.go:185] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.460408  110340 reflector.go:150] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.460484  110340 reflector.go:185] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.461247  110340 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (553.9µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:18.461289  110340 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (457.328µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43546]
I1009 21:15:18.461316  110340 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (622.655µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:18.461400  110340 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (398.154µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43562]
I1009 21:15:18.462003  110340 get.go:251] Starting watch for /api/v1/nodes, rv=57853 labels= fields= timeout=5m34s
I1009 21:15:18.461747  110340 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (353.743µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43548]
I1009 21:15:18.462005  110340 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=57853 labels= fields= timeout=6m28s
I1009 21:15:18.462245  110340 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (425.19µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43550]
I1009 21:15:18.462253  110340 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=57853 labels= fields= timeout=5m10s
I1009 21:15:18.462263  110340 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=57853 labels= fields= timeout=9m37s
I1009 21:15:18.462661  110340 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (318.515µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43552]
I1009 21:15:18.462745  110340 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=57853 labels= fields= timeout=5m38s
I1009 21:15:18.462998  110340 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (460.547µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43554]
I1009 21:15:18.463075  110340 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (399.56µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43560]
I1009 21:15:18.463162  110340 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=57853 labels= fields= timeout=8m59s
I1009 21:15:18.463207  110340 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=57853 labels= fields= timeout=9m34s
I1009 21:15:18.463589  110340 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (419.001µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43556]
I1009 21:15:18.463638  110340 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=57853 labels= fields= timeout=8m50s
I1009 21:15:18.463805  110340 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (329.953µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43558]
I1009 21:15:18.463814  110340 get.go:251] Starting watch for /api/v1/pods, rv=57853 labels= fields= timeout=6m37s
I1009 21:15:18.464377  110340 get.go:251] Starting watch for /api/v1/services, rv=58102 labels= fields= timeout=7m44s
I1009 21:15:18.464701  110340 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=57853 labels= fields= timeout=5m21s
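The "Starting reflector", "Listing and watching", LIST (?limit=500&resourceVersion=0) and "Starting watch" lines above are the scheduler's shared informers priming their caches with an initial LIST followed by a WATCH. A minimal client-go sketch of the same list-then-watch pattern for one of those resources follows; the kubeconfig path is a placeholder and the snippet assumes a recent client-go, not the exact version vendored by this test.

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Resync period 0 matches the "(0s)" printed in the reflector lines above.
    factory := informers.NewSharedInformerFactory(client, 0)
    pvcInformer := factory.Core().V1().PersistentVolumeClaims().Informer()
    pvcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            pvc := obj.(*v1.PersistentVolumeClaim)
            fmt.Println("observed PVC", pvc.Namespace+"/"+pvc.Name)
        },
    })

    stopCh := make(chan struct{})
    factory.Start(stopCh)
    // Blocks until the initial LIST is cached and the WATCH is running,
    // i.e. the point the log then reports as "caches populated".
    factory.WaitForCacheSync(stopCh)
    select {} // keep watching; a real program would wire this to shutdown
}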
I1009 21:15:18.558748  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558798  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558803  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558807  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558811  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558814  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558818  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558822  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558826  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558843  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558847  110340 shared_informer.go:227] caches populated
I1009 21:15:18.558854  110340 shared_informer.go:227] caches populated
I1009 21:15:18.559048  110340 plugins.go:630] Loaded volume plugin "kubernetes.io/mock-provisioner"
W1009 21:15:18.559084  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.559121  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.559138  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.559146  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1009 21:15:18.559155  110340 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1009 21:15:18.559217  110340 pv_controller_base.go:289] Starting persistent volume controller
I1009 21:15:18.559255  110340 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1009 21:15:18.559418  110340 reflector.go:150] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.559431  110340 reflector.go:185] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.559482  110340 reflector.go:150] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.559500  110340 reflector.go:185] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.559924  110340 reflector.go:150] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.559947  110340 reflector.go:185] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.560080  110340 reflector.go:150] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.560092  110340 reflector.go:185] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.559935  110340 reflector.go:150] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.560191  110340 reflector.go:185] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I1009 21:15:18.561595  110340 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (500.359µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43578]
I1009 21:15:18.561740  110340 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (646.864µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43576]
I1009 21:15:18.562141  110340 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (428.306µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I1009 21:15:18.562234  110340 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (377.151µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43574]
I1009 21:15:18.562796  110340 get.go:251] Starting watch for /api/v1/nodes, rv=57853 labels= fields= timeout=9m34s
I1009 21:15:18.562900  110340 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (369.734µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I1009 21:15:18.562912  110340 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=57853 labels= fields= timeout=6m18s
I1009 21:15:18.563295  110340 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=57853 labels= fields= timeout=7m36s
I1009 21:15:18.563372  110340 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=57853 labels= fields= timeout=9m5s
I1009 21:15:18.563584  110340 get.go:251] Starting watch for /api/v1/pods, rv=57853 labels= fields= timeout=5m50s
I1009 21:15:18.659396  110340 shared_informer.go:227] caches populated
I1009 21:15:18.659437  110340 shared_informer.go:204] Caches are synced for persistent volume 
I1009 21:15:18.659459  110340 pv_controller_base.go:160] controller initialized
I1009 21:15:18.659396  110340 shared_informer.go:227] caches populated
I1009 21:15:18.659486  110340 shared_informer.go:227] caches populated
I1009 21:15:18.659507  110340 shared_informer.go:227] caches populated
I1009 21:15:18.659512  110340 shared_informer.go:227] caches populated
I1009 21:15:18.659517  110340 shared_informer.go:227] caches populated
I1009 21:15:18.659594  110340 pv_controller_base.go:426] resyncing PV controller
I1009 21:15:18.662539  110340 httplog.go:90] POST /api/v1/nodes: (2.500019ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.663350  110340 node_tree.go:93] Added node "node-1" in group "" to NodeTree
I1009 21:15:18.664798  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.781607ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.666974  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.685067ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.667337  110340 volume_binding_test.go:739] Running test wait provisioned
I1009 21:15:18.669020  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.460391ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.671089  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.681073ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.672812  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.314855ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
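The run of POSTs to /apis/storage.k8s.io/v1/storageclasses around the "Running test wait provisioned" marker is the test creating its delayed-binding classes; a class name ("wait-6kwf") and the provisioner "kubernetes.io/mock-provisioner" show up further down in the provisioning logs. Below is a hedged client-go sketch of creating such a class; the package, the function name, and the context-taking client-go signatures (v0.18+) are assumptions, not the test's own code.

package volumesketch

import (
    "context"

    storagev1 "k8s.io/api/storage/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createWaitClass sketches the kind of StorageClass these POSTs create:
// dynamic provisioning is deferred until a pod actually consumes the claim.
func createWaitClass(ctx context.Context, client kubernetes.Interface, name string) error {
    mode := storagev1.VolumeBindingWaitForFirstConsumer
    sc := &storagev1.StorageClass{
        ObjectMeta:        metav1.ObjectMeta{Name: name},
        Provisioner:       "kubernetes.io/mock-provisioner", // the test-only plugin loaded earlier in the log
        VolumeBindingMode: &mode,
    }
    // The context parameter assumes client-go v0.18+ signatures.
    _, err := client.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{})
    return err
}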
I1009 21:15:18.674951  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (1.765828ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.675418  110340 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision", version 58115
I1009 21:15:18.675444  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:18.675468  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:18.675492  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Pending
I1009 21:15:18.675508  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: phase Pending already set
I1009 21:15:18.675582  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-canprovision", UID:"eb141a7b-9b39-4575-926c-62b2176f0eab", APIVersion:"v1", ResourceVersion:"58115", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1009 21:15:18.677733  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (2.153954ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.677974  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.677135ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
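The PVC POST at 21:15:18.674 and the pod POST at 21:15:18.677 pair a claim that uses a WaitForFirstConsumer class with a pod that mounts it; the WaitForFirstConsumer event in between shows the claim being deliberately left Pending until that consumer exists. A hedged sketch of the same pairing with client-go follows; names, the size, and the image are illustrative, and the context-taking signatures again assume a v0.18+ client-go.

package volumesketch

import (
    "context"

    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createClaimAndPod sketches the claim-plus-consumer pairing seen above: a PVC
// referencing a WaitForFirstConsumer class, then a pod that mounts it.
func createClaimAndPod(ctx context.Context, client kubernetes.Interface, ns, className string) error {
    pvc := &v1.PersistentVolumeClaim{
        ObjectMeta: metav1.ObjectMeta{Name: "pvc-canprovision"},
        Spec: v1.PersistentVolumeClaimSpec{
            AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
            StorageClassName: &className,
            Resources: v1.ResourceRequirements{
                Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")}, // size is illustrative
            },
        },
    }
    if _, err := client.CoreV1().PersistentVolumeClaims(ns).Create(ctx, pvc, metav1.CreateOptions{}); err != nil {
        return err
    }

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-pvc-canprovision"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:         "app",
                Image:        "k8s.gcr.io/pause:3.1", // illustrative; no kubelet runs in this test, so nothing is pulled
                VolumeMounts: []v1.VolumeMount{{Name: "data", MountPath: "/data"}},
            }},
            Volumes: []v1.Volume{{
                Name: "data",
                VolumeSource: v1.VolumeSource{
                    PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-canprovision"},
                },
            }},
        },
    }
    _, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
    return err
}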
I1009 21:15:18.678277  110340 scheduling_queue.go:883] About to try and schedule pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canprovision
I1009 21:15:18.678302  110340 scheduler.go:598] Attempting to schedule pod: volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canprovision
I1009 21:15:18.678489  110340 scheduler_binder.go:686] No matching volumes for Pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canprovision", PVC "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" on node "node-1"
I1009 21:15:18.678518  110340 scheduler_binder.go:741] Provisioning for claims of pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canprovision" that has no matching volumes on node "node-1" ...
I1009 21:15:18.678697  110340 scheduler_binder.go:257] AssumePodVolumes for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canprovision", node "node-1"
I1009 21:15:18.678747  110340 scheduler_assume_cache.go:323] Assumed v1.PersistentVolumeClaim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision", version 58115
I1009 21:15:18.678806  110340 scheduler_binder.go:332] BindPodVolumes for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canprovision", node "node-1"
I1009 21:15:18.680903  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (1.640476ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:18.681193  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58118
I1009 21:15:18.681217  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:18.681230  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:18.681237  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: started
I1009 21:15:18.681251  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[eb141a7b-9b39-4575-926c-62b2176f0eab]]
I1009 21:15:18.681293  110340 pv_controller.go:1367] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] started, class: "wait-6kwf"
I1009 21:15:18.683226  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (1.688641ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.683473  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58119
I1009 21:15:18.684118  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58119
I1009 21:15:18.684151  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:18.684174  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:18.684184  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: started
I1009 21:15:18.684199  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[eb141a7b-9b39-4575-926c-62b2176f0eab]]
I1009 21:15:18.684206  110340 pv_controller.go:1637] operation "provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[eb141a7b-9b39-4575-926c-62b2176f0eab]" is already running, skipping
I1009 21:15:18.685201  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-eb141a7b-9b39-4575-926c-62b2176f0eab: (1.19575ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.685429  110340 pv_controller.go:1471] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" created
I1009 21:15:18.685450  110340 pv_controller.go:1488] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: trying to save volume pvc-eb141a7b-9b39-4575-926c-62b2176f0eab
I1009 21:15:18.687323  110340 httplog.go:90] POST /api/v1/persistentvolumes: (1.635548ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.687543  110340 pv_controller.go:1496] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" saved
I1009 21:15:18.687577  110340 pv_controller_base.go:509] storeObjectUpdate: adding volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab", version 58120
I1009 21:15:18.687606  110340 pv_controller.go:1549] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" provisioned for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:18.687785  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" with version 58120
I1009 21:15:18.687792  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-canprovision", UID:"eb141a7b-9b39-4575-926c-62b2176f0eab", APIVersion:"v1", ResourceVersion:"58119", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-eb141a7b-9b39-4575-926c-62b2176f0eab using kubernetes.io/mock-provisioner
I1009 21:15:18.687932  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: eb141a7b-9b39-4575-926c-62b2176f0eab)", boundByController: true
I1009 21:15:18.687959  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:18.687977  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:18.687990  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:18.688025  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58119
I1009 21:15:18.688038  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:18.688063  110340 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" found: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: eb141a7b-9b39-4575-926c-62b2176f0eab)", boundByController: true
I1009 21:15:18.688080  110340 pv_controller.go:933] binding volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:18.688099  110340 pv_controller.go:831] updating PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:18.688114  110340 pv_controller.go:843] updating PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:18.688149  110340 pv_controller.go:779] updating PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: set phase Bound
I1009 21:15:18.689517  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.600569ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:18.690380  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-eb141a7b-9b39-4575-926c-62b2176f0eab/status: (1.943375ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:18.690617  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" with version 58122
I1009 21:15:18.690642  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" with version 58122
I1009 21:15:18.690653  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: eb141a7b-9b39-4575-926c-62b2176f0eab)", boundByController: true
I1009 21:15:18.690672  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:18.690673  110340 pv_controller.go:800] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" entered phase "Bound"
I1009 21:15:18.690692  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:18.690690  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: binding to "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab"
I1009 21:15:18.690707  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:18.690714  110340 pv_controller.go:903] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:18.694249  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (3.077133ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:18.694770  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58123
I1009 21:15:18.694810  110340 pv_controller.go:914] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: bound to "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab"
I1009 21:15:18.694821  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Bound
I1009 21:15:18.697817  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision/status: (2.66206ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:18.698256  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58124
I1009 21:15:18.698383  110340 pv_controller.go:744] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" entered phase "Bound"
I1009 21:15:18.698400  110340 pv_controller.go:959] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:18.698429  110340 pv_controller.go:960] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: eb141a7b-9b39-4575-926c-62b2176f0eab)", boundByController: true
I1009 21:15:18.698447  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab", bindCompleted: true, boundByController: true
I1009 21:15:18.698486  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58124
I1009 21:15:18.698506  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Bound, bound to: "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab", bindCompleted: true, boundByController: true
I1009 21:15:18.698534  110340 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" found: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: eb141a7b-9b39-4575-926c-62b2176f0eab)", boundByController: true
I1009 21:15:18.698546  110340 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: claim is already correctly bound
I1009 21:15:18.698555  110340 pv_controller.go:933] binding volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:18.698563  110340 pv_controller.go:831] updating PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:18.698582  110340 pv_controller.go:843] updating PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:18.698591  110340 pv_controller.go:779] updating PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: set phase Bound
I1009 21:15:18.698608  110340 pv_controller.go:782] updating PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: phase Bound already set
I1009 21:15:18.698617  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: binding to "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab"
I1009 21:15:18.698659  110340 pv_controller.go:918] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: already bound to "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab"
I1009 21:15:18.698669  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Bound
I1009 21:15:18.698688  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: phase Bound already set
I1009 21:15:18.698705  110340 pv_controller.go:959] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:18.698724  110340 pv_controller.go:960] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: eb141a7b-9b39-4575-926c-62b2176f0eab)", boundByController: true
I1009 21:15:18.698739  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab", bindCompleted: true, boundByController: true
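At this point the controller has provisioned pvc-eb141a7b-9b39-4575-926c-62b2176f0eab and marked both the volume and the claim Bound, while the scheduler is still waiting out its bind; the repeated GETs of pod-pvc-canprovision that follow are the test polling for the outcome. A hedged sketch of that kind of wait using client-go's polling helper; the function name, interval, and timeout are assumptions.

package volumesketch

import (
    "context"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForBoundAndScheduled sketches the polling visible below: keep GETting the
// objects until the claim reports Bound and the pod has a node assigned.
func waitForBoundAndScheduled(ctx context.Context, client kubernetes.Interface, ns, pvcName, podName string) error {
    // ~100ms polls roughly match the cadence of the GETs in the log; both values are illustrative.
    return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
        pvc, err := client.CoreV1().PersistentVolumeClaims(ns).Get(ctx, pvcName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        pod, err := client.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return pvc.Status.Phase == v1.ClaimBound && pod.Spec.NodeName != "", nil
    })
}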
I1009 21:15:18.781077  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (2.596036ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:18.880312  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (1.77468ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:18.980367  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (1.890109ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.080187  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (1.773872ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.180298  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (1.802552ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.280063  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (1.631666ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.380257  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (1.781165ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.458710  110340 cache.go:669] Couldn't expire cache for pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canprovision. Binding is still in progress.
I1009 21:15:19.480205  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (1.6751ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.580056  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (1.564524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.680061  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (1.63826ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.681903  110340 scheduler_binder.go:553] All PVCs for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canprovision" are bound
I1009 21:15:19.681973  110340 factory.go:710] Attempting to bind pod-pvc-canprovision to node-1
I1009 21:15:19.684234  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision/binding: (1.942118ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.684453  110340 scheduler.go:730] pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canprovision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1009 21:15:19.686872  110340 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.973096ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.780343  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canprovision: (1.691315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.782179  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (1.335946ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.788622  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (5.923548ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.793603  110340 pv_controller_base.go:265] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" deleted
I1009 21:15:19.793646  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" with version 58122
I1009 21:15:19.793681  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: eb141a7b-9b39-4575-926c-62b2176f0eab)", boundByController: true
I1009 21:15:19.793694  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:19.794039  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (4.44963ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.795199  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (1.193436ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:19.795427  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision not found
I1009 21:15:19.795445  110340 pv_controller.go:577] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" is released and reclaim policy "Delete" will be executed
I1009 21:15:19.795455  110340 pv_controller.go:779] updating PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: set phase Released
I1009 21:15:19.797565  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-eb141a7b-9b39-4575-926c-62b2176f0eab/status: (1.917728ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:19.798139  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" with version 58206
I1009 21:15:19.798166  110340 pv_controller.go:800] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" entered phase "Released"
I1009 21:15:19.798175  110340 pv_controller.go:1024] reclaimVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: policy is Delete
I1009 21:15:19.798193  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-eb141a7b-9b39-4575-926c-62b2176f0eab[2351b46a-46f2-4acc-8a11-bd7ecb7178b2]]
I1009 21:15:19.798213  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" with version 58206
I1009 21:15:19.798237  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: phase: Released, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: eb141a7b-9b39-4575-926c-62b2176f0eab)", boundByController: true
I1009 21:15:19.798249  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:19.798265  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision not found
I1009 21:15:19.798269  110340 pv_controller.go:1024] reclaimVolume[pvc-eb141a7b-9b39-4575-926c-62b2176f0eab]: policy is Delete
I1009 21:15:19.798275  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-eb141a7b-9b39-4575-926c-62b2176f0eab[2351b46a-46f2-4acc-8a11-bd7ecb7178b2]]
I1009 21:15:19.798280  110340 pv_controller.go:1637] operation "delete-pvc-eb141a7b-9b39-4575-926c-62b2176f0eab[2351b46a-46f2-4acc-8a11-bd7ecb7178b2]" is already running, skipping
I1009 21:15:19.798300  110340 pv_controller.go:1148] deleteVolumeOperation [pvc-eb141a7b-9b39-4575-926c-62b2176f0eab] started
I1009 21:15:19.799553  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-eb141a7b-9b39-4575-926c-62b2176f0eab: (1.044718ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:19.799906  110340 pv_controller_base.go:216] volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" deleted
I1009 21:15:19.799940  110340 pv_controller.go:1155] error reading persistent volume "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab": persistentvolumes "pvc-eb141a7b-9b39-4575-926c-62b2176f0eab" not found
I1009 21:15:19.799953  110340 pv_controller_base.go:403] deletion of claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" was already processed
I1009 21:15:19.800866  110340 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.471838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.816289  110340 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (15.090583ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
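For readers following the reclaim flow logged just above (claim deleted, volume entering phase "Released", "policy is Delete", deleteVolumeOperation started), here is a minimal Go sketch of the decision those lines describe. It is illustrative only and not taken from the pv_controller source; the helper name and return strings are assumptions.

package sketch

import corev1 "k8s.io/api/core/v1"

// reclaimDecision mirrors what the log above shows after a bound claim is
// deleted: the PV controller inspects the volume's reclaim policy and, for
// "Delete", schedules a delete-<volume> operation. Illustrative sketch only,
// not the controller's actual implementation.
func reclaimDecision(pv *corev1.PersistentVolume) string {
	switch pv.Spec.PersistentVolumeReclaimPolicy {
	case corev1.PersistentVolumeReclaimDelete:
		return "schedule operation delete-" + pv.Name
	case corev1.PersistentVolumeReclaimRetain:
		return "leave volume " + pv.Name + " in phase Released"
	case corev1.PersistentVolumeReclaimRecycle:
		return "schedule operation recycle-" + pv.Name
	default:
		return "unrecognized reclaim policy"
	}
}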
I1009 21:15:19.816627  110340 volume_binding_test.go:739] Running test topology unsatisfied
I1009 21:15:19.819022  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.180037ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.821136  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.720177ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.823018  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.494084ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.825514  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (1.964372ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.826021  110340 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-topomismatch", version 58217
I1009 21:15:19.826057  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-topomismatch]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:19.826080  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-topomismatch]: no volume found
I1009 21:15:19.826104  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-topomismatch] status: set phase Pending
I1009 21:15:19.826161  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-topomismatch] status: phase Pending already set
I1009 21:15:19.826270  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-topomismatch", UID:"906b1efb-08ec-48c4-b3bb-a4df40bb29f4", APIVersion:"v1", ResourceVersion:"58217", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1009 21:15:19.828457  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.98611ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:19.828699  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (2.228837ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.828964  110340 scheduling_queue.go:883] About to try and schedule pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-topomismatch
I1009 21:15:19.828989  110340 scheduler.go:598] Attempting to schedule pod: volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-topomismatch
I1009 21:15:19.829131  110340 scheduler_binder.go:686] No matching volumes for Pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-topomismatch", PVC "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-topomismatch" on node "node-1"
I1009 21:15:19.829227  110340 scheduler_binder.go:731] Node "node-1" cannot satisfy provisioning topology requirements of claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-topomismatch"
I1009 21:15:19.829337  110340 factory.go:645] Unable to schedule volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-topomismatch: no fit: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.; waiting
I1009 21:15:19.829412  110340 scheduler.go:746] Updating pod condition for volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-topomismatch to (PodScheduled==False, Reason=Unschedulable)
I1009 21:15:19.831658  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-topomismatch/status: (1.823434ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:19.832191  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-topomismatch: (2.391592ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.833178  110340 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (2.270833ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.833971  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-topomismatch: (1.698615ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I1009 21:15:19.834252  110340 generic_scheduler.go:325] Preemption will not help schedule pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-topomismatch on any node.
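The topology-mismatch case above fails scheduling because the claim's StorageClass restricts provisioning to a topology that no node carries, so the volume binder reports "cannot satisfy provisioning topology requirements" and the pod stays unschedulable. A minimal Go sketch of such a class follows; it is not the test's own fixture. The provisioner name and WaitForFirstConsumer mode come from this log, while the class name and zone values are assumptions for illustration.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// topoMismatchStorageClass sketches a WaitForFirstConsumer class whose
// allowedTopologies exclude every node in the cluster, which is the shape of
// the case logged above.
func topoMismatchStorageClass() *storagev1.StorageClass {
	wait := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-topomismatch"}, // assumed name
		Provisioner:       "kubernetes.io/mock-provisioner",             // provisioner seen in this log
		VolumeBindingMode: &wait,
		// No node carries this zone value, so provisioning cannot be
		// satisfied on any node and scheduling fails as above.
		AllowedTopologies: []corev1.TopologySelectorTerm{{
			MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
				Key:    "topology.kubernetes.io/zone",
				Values: []string{"unsatisfiable-zone"}, // assumed value
			}},
		}},
	}
}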
I1009 21:15:19.931524  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-topomismatch: (1.990456ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.933736  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-topomismatch: (1.448829ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.938727  110340 scheduling_queue.go:883] About to try and schedule pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-topomismatch
I1009 21:15:19.938775  110340 scheduler.go:594] Skip schedule deleting pod: volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-topomismatch
I1009 21:15:19.940526  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (6.166922ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.940714  110340 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.597351ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.945486  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (4.411327ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.947022  110340 pv_controller_base.go:265] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-topomismatch" deleted
I1009 21:15:19.948117  110340 httplog.go:90] DELETE /api/v1/persistentvolumes: (2.125025ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.960256  110340 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (11.665864ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.960584  110340 volume_binding_test.go:739] Running test wait one bound, one provisioned
I1009 21:15:19.962498  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.683459ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.964704  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.780743ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.966943  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.659833ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.971044  110340 httplog.go:90] POST /api/v1/persistentvolumes: (3.393506ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.972414  110340 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind", version 58236
I1009 21:15:19.972534  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I1009 21:15:19.972641  110340 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1009 21:15:19.972695  110340 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Available
I1009 21:15:19.975016  110340 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind", version 58237
I1009 21:15:19.975056  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:19.975085  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: no volume found
I1009 21:15:19.975109  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind] status: set phase Pending
I1009 21:15:19.975123  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind] status: phase Pending already set
I1009 21:15:19.975161  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (3.445943ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.975423  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-w-canbind", UID:"90a02836-7cba-4472-ba06-4f2742492615", APIVersion:"v1", ResourceVersion:"58237", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1009 21:15:19.977548  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (1.637882ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:19.978514  110340 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision", version 58238
I1009 21:15:19.978543  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:19.978570  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:19.978589  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Pending
I1009 21:15:19.978604  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: phase Pending already set
I1009 21:15:19.978625  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-canprovision", UID:"b60112d0-5a9f-4d7e-8117-6db97f2c3187", APIVersion:"v1", ResourceVersion:"58238", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
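Both claims above sit in phase Pending with only a WaitForFirstConsumer event because their StorageClass defers binding until a pod consumes them. A minimal sketch of such a claim, assuming a 1Gi request and ReadWriteOnce access as placeholders; the class name "wait-dsdl" is the one this test logs for provisioning.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// canProvisionPVC sketches a claim like pvc-canprovision: it references a
// WaitForFirstConsumer class, so the PV controller only emits the
// WaitForFirstConsumer event and leaves it Pending until a pod uses it.
func canProvisionPVC(namespace string) *corev1.PersistentVolumeClaim {
	className := "wait-dsdl" // class name taken from this log
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-canprovision", Namespace: namespace},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, // assumed
			StorageClassName: &className,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"), // assumed size
				},
			},
		},
	}
}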
I1009 21:15:19.980425  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (2.444187ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:19.980844  110340 scheduling_queue.go:883] About to try and schedule pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canbind-or-provision
I1009 21:15:19.980862  110340 scheduler.go:598] Attempting to schedule pod: volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canbind-or-provision
I1009 21:15:19.981074  110340 scheduler_binder.go:686] No matching volumes for Pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canbind-or-provision", PVC "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" on node "node-1"
I1009 21:15:19.981099  110340 scheduler_binder.go:686] No matching volumes for Pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canbind-or-provision", PVC "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" on node "node-1"
I1009 21:15:19.981123  110340 scheduler_binder.go:741] Provisioning for claims of pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canbind-or-provision" that has no matching volumes on node "node-1" ...
I1009 21:15:19.981183  110340 scheduler_binder.go:257] AssumePodVolumes for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canbind-or-provision", node "node-1"
I1009 21:15:19.981207  110340 scheduler_assume_cache.go:323] Assumed v1.PersistentVolumeClaim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind", version 58237
I1009 21:15:19.981220  110340 scheduler_assume_cache.go:323] Assumed v1.PersistentVolumeClaim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision", version 58238
I1009 21:15:19.981272  110340 scheduler_binder.go:332] BindPodVolumes for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canbind-or-provision", node "node-1"
I1009 21:15:19.983468  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" with version 58240
I1009 21:15:19.983498  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:19.983526  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: no volume found
I1009 21:15:19.983536  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: started
I1009 21:15:19.983553  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind[90a02836-7cba-4472-ba06-4f2742492615]]
I1009 21:15:19.983602  110340 pv_controller.go:1367] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind] started, class: "wait-dsdl"
I1009 21:15:19.983999  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (7.008837ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.984145  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-w-canbind: (2.612465ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:19.984361  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (11.160771ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.985416  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 58242
I1009 21:15:19.985443  110340 pv_controller.go:800] volume "pv-w-canbind" entered phase "Available"
I1009 21:15:19.985695  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 58242
I1009 21:15:19.985724  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I1009 21:15:19.985743  110340 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1009 21:15:19.985750  110340 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Available
I1009 21:15:19.985760  110340 pv_controller.go:782] updating PersistentVolume[pv-w-canbind]: phase Available already set
I1009 21:15:19.989962  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (5.165826ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1009 21:15:19.990398  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" with version 58243
I1009 21:15:19.990430  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:19.990454  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: no volume found
I1009 21:15:19.990461  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: started
I1009 21:15:19.990477  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind[90a02836-7cba-4472-ba06-4f2742492615]]
I1009 21:15:19.990484  110340 pv_controller.go:1637] operation "provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind[90a02836-7cba-4472-ba06-4f2742492615]" is already running, skipping
I1009 21:15:19.990782  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58245
I1009 21:15:19.990992  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:19.991040  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:19.991241  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: started
I1009 21:15:19.991301  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[b60112d0-5a9f-4d7e-8117-6db97f2c3187]]
I1009 21:15:19.991368  110340 pv_controller.go:1367] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] started, class: "wait-dsdl"
I1009 21:15:19.992827  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-w-canbind: (8.127501ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:19.994485  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58246
I1009 21:15:19.994587  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:19.994652  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:19.994758  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: started
I1009 21:15:19.994855  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[b60112d0-5a9f-4d7e-8117-6db97f2c3187]]
I1009 21:15:19.994943  110340 pv_controller.go:1637] operation "provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[b60112d0-5a9f-4d7e-8117-6db97f2c3187]" is already running, skipping
I1009 21:15:19.993969  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (9.224661ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43636]
I1009 21:15:19.994084  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (2.448109ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I1009 21:15:19.995372  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58246
I1009 21:15:19.995097  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" with version 58243
I1009 21:15:19.996927  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187: (1.37037ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:19.997215  110340 pv_controller.go:1471] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" created
I1009 21:15:19.997307  110340 pv_controller.go:1488] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: trying to save volume pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187
I1009 21:15:19.997804  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-90a02836-7cba-4472-ba06-4f2742492615: (1.338803ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:19.998078  110340 pv_controller.go:1471] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" created
I1009 21:15:19.998104  110340 pv_controller.go:1488] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: trying to save volume pvc-90a02836-7cba-4472-ba06-4f2742492615
I1009 21:15:20.000318  110340 httplog.go:90] POST /api/v1/persistentvolumes: (1.604054ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:20.000596  110340 pv_controller.go:1496] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" saved
I1009 21:15:20.000634  110340 pv_controller_base.go:509] storeObjectUpdate: adding volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187", version 58248
I1009 21:15:20.000654  110340 pv_controller.go:1549] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" provisioned for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:20.000672  110340 pv_controller_base.go:509] storeObjectUpdate: adding volume "pvc-90a02836-7cba-4472-ba06-4f2742492615", version 58247
I1009 21:15:20.000707  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind (uid: 90a02836-7cba-4472-ba06-4f2742492615)", boundByController: true
I1009 21:15:20.000773  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind
I1009 21:15:20.000795  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:20.000809  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:20.000850  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" with version 58248
I1009 21:15:20.000873  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: b60112d0-5a9f-4d7e-8117-6db97f2c3187)", boundByController: true
I1009 21:15:20.000883  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:20.000897  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:20.000909  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:20.000952  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" with version 58243
I1009 21:15:20.000739  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-canprovision", UID:"b60112d0-5a9f-4d7e-8117-6db97f2c3187", APIVersion:"v1", ResourceVersion:"58246", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187 using kubernetes.io/mock-provisioner
I1009 21:15:20.000975  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:20.001009  110340 httplog.go:90] POST /api/v1/persistentvolumes: (2.702164ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.001030  110340 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" found: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind (uid: 90a02836-7cba-4472-ba06-4f2742492615)", boundByController: true
I1009 21:15:20.001051  110340 pv_controller.go:933] binding volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind"
I1009 21:15:20.001065  110340 pv_controller.go:831] updating PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind"
I1009 21:15:20.001081  110340 pv_controller.go:843] updating PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind"
I1009 21:15:20.001091  110340 pv_controller.go:779] updating PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: set phase Bound
I1009 21:15:20.001295  110340 pv_controller.go:1496] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" saved
I1009 21:15:20.001325  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" with version 58247
I1009 21:15:20.001461  110340 pv_controller.go:1549] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" provisioned for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind"
I1009 21:15:20.001507  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-w-canbind", UID:"90a02836-7cba-4472-ba06-4f2742492615", APIVersion:"v1", ResourceVersion:"58243", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-90a02836-7cba-4472-ba06-4f2742492615 using kubernetes.io/mock-provisioner
I1009 21:15:20.002618  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.57509ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:20.003337  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-90a02836-7cba-4472-ba06-4f2742492615/status: (2.00702ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.003512  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" with version 58250
I1009 21:15:20.003714  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind (uid: 90a02836-7cba-4472-ba06-4f2742492615)", boundByController: true
I1009 21:15:20.003737  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind
I1009 21:15:20.003759  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:20.003774  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:20.003783  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" with version 58250
I1009 21:15:20.003801  110340 pv_controller.go:800] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" entered phase "Bound"
I1009 21:15:20.003811  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: binding to "pvc-90a02836-7cba-4472-ba06-4f2742492615"
I1009 21:15:20.003827  110340 pv_controller.go:903] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind"
I1009 21:15:20.004672  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.537895ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:20.006205  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-w-canbind: (1.789898ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.006634  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" with version 58252
I1009 21:15:20.006668  110340 pv_controller.go:914] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: bound to "pvc-90a02836-7cba-4472-ba06-4f2742492615"
I1009 21:15:20.006681  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind] status: set phase Bound
I1009 21:15:20.008972  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-w-canbind/status: (1.846532ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.009192  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" with version 58254
I1009 21:15:20.009217  110340 pv_controller.go:744] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" entered phase "Bound"
I1009 21:15:20.009234  110340 pv_controller.go:959] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind"
I1009 21:15:20.009257  110340 pv_controller.go:960] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind (uid: 90a02836-7cba-4472-ba06-4f2742492615)", boundByController: true
I1009 21:15:20.009274  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-90a02836-7cba-4472-ba06-4f2742492615", bindCompleted: true, boundByController: true
I1009 21:15:20.009319  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58246
I1009 21:15:20.009333  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:20.009464  110340 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" found: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: b60112d0-5a9f-4d7e-8117-6db97f2c3187)", boundByController: true
I1009 21:15:20.009487  110340 pv_controller.go:933] binding volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:20.009500  110340 pv_controller.go:831] updating PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:20.009544  110340 pv_controller.go:843] updating PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:20.009559  110340 pv_controller.go:779] updating PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: set phase Bound
I1009 21:15:20.011910  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187/status: (1.860768ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.012374  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" with version 58255
I1009 21:15:20.012412  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: b60112d0-5a9f-4d7e-8117-6db97f2c3187)", boundByController: true
I1009 21:15:20.012425  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:20.012444  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:20.012459  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:20.012786  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" with version 58255
I1009 21:15:20.012913  110340 pv_controller.go:800] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" entered phase "Bound"
I1009 21:15:20.013111  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: binding to "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187"
I1009 21:15:20.013263  110340 pv_controller.go:903] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:20.015709  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (2.117147ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.016181  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58256
I1009 21:15:20.016327  110340 pv_controller.go:914] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: bound to "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187"
I1009 21:15:20.016557  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Bound
I1009 21:15:20.019100  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision/status: (1.94211ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.019504  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58257
I1009 21:15:20.019637  110340 pv_controller.go:744] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" entered phase "Bound"
I1009 21:15:20.019695  110340 pv_controller.go:959] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:20.019750  110340 pv_controller.go:960] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: b60112d0-5a9f-4d7e-8117-6db97f2c3187)", boundByController: true
I1009 21:15:20.019799  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187", bindCompleted: true, boundByController: true
I1009 21:15:20.020017  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" with version 58254
I1009 21:15:20.020105  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: phase: Bound, bound to: "pvc-90a02836-7cba-4472-ba06-4f2742492615", bindCompleted: true, boundByController: true
I1009 21:15:20.020129  110340 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" found: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind (uid: 90a02836-7cba-4472-ba06-4f2742492615)", boundByController: true
I1009 21:15:20.020140  110340 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: claim is already correctly bound
I1009 21:15:20.020151  110340 pv_controller.go:933] binding volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind"
I1009 21:15:20.020163  110340 pv_controller.go:831] updating PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind"
I1009 21:15:20.020184  110340 pv_controller.go:843] updating PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind"
I1009 21:15:20.020194  110340 pv_controller.go:779] updating PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: set phase Bound
I1009 21:15:20.020288  110340 pv_controller.go:782] updating PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: phase Bound already set
I1009 21:15:20.020381  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: binding to "pvc-90a02836-7cba-4472-ba06-4f2742492615"
I1009 21:15:20.020474  110340 pv_controller.go:918] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind]: already bound to "pvc-90a02836-7cba-4472-ba06-4f2742492615"
I1009 21:15:20.020556  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind] status: set phase Bound
I1009 21:15:20.020749  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind] status: phase Bound already set
I1009 21:15:20.020968  110340 pv_controller.go:959] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind"
I1009 21:15:20.021101  110340 pv_controller.go:960] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind (uid: 90a02836-7cba-4472-ba06-4f2742492615)", boundByController: true
I1009 21:15:20.021193  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-90a02836-7cba-4472-ba06-4f2742492615", bindCompleted: true, boundByController: true
I1009 21:15:20.021293  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58257
I1009 21:15:20.021362  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Bound, bound to: "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187", bindCompleted: true, boundByController: true
I1009 21:15:20.021470  110340 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" found: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: b60112d0-5a9f-4d7e-8117-6db97f2c3187)", boundByController: true
I1009 21:15:20.021539  110340 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: claim is already correctly bound
I1009 21:15:20.021614  110340 pv_controller.go:933] binding volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:20.021738  110340 pv_controller.go:831] updating PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:20.021889  110340 pv_controller.go:843] updating PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:20.021969  110340 pv_controller.go:779] updating PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: set phase Bound
I1009 21:15:20.022026  110340 pv_controller.go:782] updating PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: phase Bound already set
I1009 21:15:20.022140  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: binding to "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187"
I1009 21:15:20.022221  110340 pv_controller.go:918] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: already bound to "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187"
I1009 21:15:20.022330  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Bound
I1009 21:15:20.022413  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: phase Bound already set
I1009 21:15:20.022494  110340 pv_controller.go:959] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:20.022606  110340 pv_controller.go:960] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: b60112d0-5a9f-4d7e-8117-6db97f2c3187)", boundByController: true
I1009 21:15:20.022677  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187", bindCompleted: true, boundByController: true
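The lines above show why binding completes so quickly after provisioning: the provisioner saves each PV already pre-bound to its claim (ClaimRef set, boundByController true) with reclaim policy Delete, so syncClaim only has to flip phases to Bound. A Go sketch of that object's shape follows; the hostPath source and 1Gi capacity are placeholders rather than the mock provisioner's actual output, while the "pvc-<uid>" name, provisioner name, and Delete policy match this log.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// provisionedPV sketches the volume saved for a claim such as pvc-canprovision:
// created already pointing back at the claim via ClaimRef, with reclaim policy
// Delete, which is why deleting the claim later drives it through Released and
// into deleteVolumeOperation.
func provisionedPV(claim *corev1.PersistentVolumeClaim) *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{
			Name: "pvc-" + string(claim.UID), // naming pattern seen in this log
			Annotations: map[string]string{
				"pv.kubernetes.io/provisioned-by": "kubernetes.io/mock-provisioner",
			},
		},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")}, // placeholder
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/" + claim.Name}, // placeholder source
			},
			AccessModes:                   claim.Spec.AccessModes,
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimDelete,
			StorageClassName:              "wait-dsdl",
			ClaimRef: &corev1.ObjectReference{
				Kind:       "PersistentVolumeClaim",
				APIVersion: "v1",
				Namespace:  claim.Namespace,
				Name:       claim.Name,
				UID:        claim.UID,
			},
		},
	}
}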
I1009 21:15:20.083440  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (1.927788ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.183994  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (2.386743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.288290  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (6.00247ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.383568  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (2.061608ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.458961  110340 cache.go:669] Couldn't expire cache for pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canbind-or-provision. Binding is still in progress.
I1009 21:15:20.483562  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (2.06916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.583419  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (2.069135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.682965  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (1.613405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.783160  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (1.757964ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.883388  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (1.978304ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.983508  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (2.050515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.990525  110340 scheduler_binder.go:553] All PVCs for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canbind-or-provision" are bound
I1009 21:15:20.990612  110340 factory.go:710] Attempting to bind pod-pvc-canbind-or-provision to node-1
I1009 21:15:20.994471  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision/binding: (3.448723ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:20.995005  110340 scheduler.go:730] pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canbind-or-provision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1009 21:15:20.998030  110340 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (2.374597ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
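The scheduler only binds pod-pvc-canbind-or-provision to node-1 once scheduler_binder reports that all of the pod's PVCs are bound, which implies the pod references both claims. A sketch of a pod with that shape, with the image and mount paths as placeholders (the pod and claim names come from this log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// canBindOrProvisionPod sketches the shape of pod-pvc-canbind-or-provision:
// one container mounting both claims, so the volume binder waits until every
// PVC of the pod is bound before the pod itself is bound to a node.
func canBindOrProvisionPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-pvc-canbind-or-provision", Namespace: namespace},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
				VolumeMounts: []corev1.VolumeMount{
					{Name: "vol-canbind", MountPath: "/mnt/canbind"},
					{Name: "vol-canprovision", MountPath: "/mnt/canprovision"},
				},
			}},
			Volumes: []corev1.Volume{
				{
					Name: "vol-canbind",
					VolumeSource: corev1.VolumeSource{
						PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-w-canbind"},
					},
				},
				{
					Name: "vol-canprovision",
					VolumeSource: corev1.VolumeSource{
						PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-canprovision"},
					},
				},
			},
		},
	}
}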
I1009 21:15:21.083283  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-pvc-canbind-or-provision: (1.88804ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:21.085294  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-w-canbind: (1.36188ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:21.087384  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (1.619276ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:21.089571  110340 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (1.496695ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:21.097689  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (7.49854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:21.103079  110340 pv_controller_base.go:265] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" deleted
I1009 21:15:21.103370  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" with version 58255
I1009 21:15:21.103475  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: b60112d0-5a9f-4d7e-8117-6db97f2c3187)", boundByController: true
I1009 21:15:21.103541  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:21.105621  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (1.780894ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.106074  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision not found
I1009 21:15:21.106108  110340 pv_controller.go:577] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" is released and reclaim policy "Delete" will be executed
I1009 21:15:21.106122  110340 pv_controller.go:779] updating PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: set phase Released
I1009 21:15:21.106125  110340 pv_controller_base.go:265] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" deleted
I1009 21:15:21.105983  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (7.866116ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:21.109442  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187/status: (3.063677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.109639  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" with version 58305
I1009 21:15:21.109665  110340 pv_controller.go:800] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" entered phase "Released"
I1009 21:15:21.109679  110340 pv_controller.go:1024] reclaimVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: policy is Delete
I1009 21:15:21.109701  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187[15f5365f-e983-4bdb-bab0-dd3f62c2bd39]]
I1009 21:15:21.109726  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" with version 58250
I1009 21:15:21.109753  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind (uid: 90a02836-7cba-4472-ba06-4f2742492615)", boundByController: true
I1009 21:15:21.109763  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind
I1009 21:15:21.109863  110340 pv_controller.go:1148] deleteVolumeOperation [pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187] started
I1009 21:15:21.111779  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-w-canbind: (1.823946ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.112074  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187: (1.803318ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I1009 21:15:21.112306  110340 pv_controller.go:1252] isVolumeReleased[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: volume is released
I1009 21:15:21.112323  110340 pv_controller.go:1287] doDeleteVolume [pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]
I1009 21:15:21.112419  110340 pv_controller.go:1318] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" deleted
I1009 21:15:21.112431  110340 pv_controller.go:1195] deleteVolumeOperation [pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: success
I1009 21:15:21.112653  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind not found
I1009 21:15:21.112672  110340 pv_controller.go:577] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" is released and reclaim policy "Delete" will be executed
I1009 21:15:21.112689  110340 pv_controller.go:779] updating PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: set phase Released
I1009 21:15:21.114873  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-90a02836-7cba-4472-ba06-4f2742492615/status: (1.988178ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.115095  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" with version 58307
I1009 21:15:21.115127  110340 pv_controller.go:800] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" entered phase "Released"
I1009 21:15:21.115139  110340 pv_controller.go:1024] reclaimVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: policy is Delete
I1009 21:15:21.115158  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-90a02836-7cba-4472-ba06-4f2742492615[20ed5d69-1f07-4806-ae23-8a56b5f126f4]]
I1009 21:15:21.115189  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" with version 58305
I1009 21:15:21.115215  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: phase: Released, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: b60112d0-5a9f-4d7e-8117-6db97f2c3187)", boundByController: true
I1009 21:15:21.115245  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:21.115266  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision not found
I1009 21:15:21.115273  110340 pv_controller.go:1024] reclaimVolume[pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187]: policy is Delete
I1009 21:15:21.115283  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187[15f5365f-e983-4bdb-bab0-dd3f62c2bd39]]
I1009 21:15:21.115290  110340 pv_controller.go:1637] operation "delete-pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187[15f5365f-e983-4bdb-bab0-dd3f62c2bd39]" is already running, skipping
I1009 21:15:21.115312  110340 pv_controller_base.go:216] volume "pv-w-canbind" deleted
I1009 21:15:21.115327  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" with version 58307
I1009 21:15:21.115347  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: phase: Released, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind (uid: 90a02836-7cba-4472-ba06-4f2742492615)", boundByController: true
I1009 21:15:21.115357  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind
I1009 21:15:21.115377  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind not found
I1009 21:15:21.115383  110340 pv_controller.go:1024] reclaimVolume[pvc-90a02836-7cba-4472-ba06-4f2742492615]: policy is Delete
I1009 21:15:21.115392  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-90a02836-7cba-4472-ba06-4f2742492615[20ed5d69-1f07-4806-ae23-8a56b5f126f4]]
I1009 21:15:21.115397  110340 pv_controller.go:1637] operation "delete-pvc-90a02836-7cba-4472-ba06-4f2742492615[20ed5d69-1f07-4806-ae23-8a56b5f126f4]" is already running, skipping
I1009 21:15:21.115447  110340 pv_controller.go:1148] deleteVolumeOperation [pvc-90a02836-7cba-4472-ba06-4f2742492615] started
I1009 21:15:21.117700  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-90a02836-7cba-4472-ba06-4f2742492615: (1.652124ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.118197  110340 pv_controller.go:1252] isVolumeReleased[pvc-90a02836-7cba-4472-ba06-4f2742492615]: volume is released
I1009 21:15:21.118228  110340 pv_controller.go:1287] doDeleteVolume [pvc-90a02836-7cba-4472-ba06-4f2742492615]
I1009 21:15:21.118256  110340 pv_controller.go:1318] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" deleted
I1009 21:15:21.118266  110340 pv_controller.go:1195] deleteVolumeOperation [pvc-90a02836-7cba-4472-ba06-4f2742492615]: success
I1009 21:15:21.118351  110340 pv_controller_base.go:216] volume "pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187" deleted
I1009 21:15:21.118414  110340 pv_controller_base.go:403] deletion of claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" was already processed
I1009 21:15:21.118365  110340 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-b60112d0-5a9f-4d7e-8117-6db97f2c3187: (5.65498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I1009 21:15:21.119530  110340 pv_controller_base.go:216] volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" deleted
I1009 21:15:21.119580  110340 pv_controller_base.go:403] deletion of claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind" was already processed
I1009 21:15:21.120337  110340 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-90a02836-7cba-4472-ba06-4f2742492615: (1.918211ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.120531  110340 pv_controller.go:1202] failed to delete volume "pvc-90a02836-7cba-4472-ba06-4f2742492615" from database: persistentvolumes "pvc-90a02836-7cba-4472-ba06-4f2742492615" not found
I1009 21:15:21.121254  110340 httplog.go:90] DELETE /api/v1/persistentvolumes: (14.650807ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I1009 21:15:21.135062  110340 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (13.439315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
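Between test cases the fixture tears everything down with collection deletes: the DELETE calls above against /pods, /persistentvolumeclaims, /persistentvolumes and /storageclasses each remove a whole collection in one request. A minimal client-go sketch of the same teardown, assuming a recent client-go where calls take a context; the clientset and namespace are placeholders, not the test's own helpers:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupTestObjects mirrors the per-test teardown seen in the log:
// delete all pods and PVCs in the test namespace, then all PVs and
// StorageClasses, each via a single collection DELETE.
func cleanupTestObjects(ctx context.Context, cs kubernetes.Interface, ns string) error {
	del := metav1.DeleteOptions{}
	all := metav1.ListOptions{}
	if err := cs.CoreV1().Pods(ns).DeleteCollection(ctx, del, all); err != nil {
		return err
	}
	if err := cs.CoreV1().PersistentVolumeClaims(ns).DeleteCollection(ctx, del, all); err != nil {
		return err
	}
	if err := cs.CoreV1().PersistentVolumes().DeleteCollection(ctx, del, all); err != nil {
		return err
	}
	return cs.StorageV1().StorageClasses().DeleteCollection(ctx, del, all)
}
```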
I1009 21:15:21.135421  110340 volume_binding_test.go:739] Running test one immediate pv prebound, one wait provisioned
I1009 21:15:21.138933  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.1442ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.141645  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.268067ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.144112  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.938724ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.147588  110340 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-prebound", version 58317
I1009 21:15:21.147642  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound (uid: )", boundByController: false
I1009 21:15:21.147657  110340 pv_controller.go:508] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound
I1009 21:15:21.147665  110340 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Available
I1009 21:15:21.147792  110340 httplog.go:90] POST /api/v1/persistentvolumes: (3.015489ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
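The pv-i-prebound object POSTed above is "pre-bound": it arrives with a claimRef already naming pvc-i-pv-prebound, which is why syncVolume reports "volume is pre-bound to claim ..." and the controller only has to complete the bind. A hedged sketch of such a PV object; the capacity, hostPath and class name are illustrative placeholders, not the test fixture's actual values:

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preBoundPV builds a PersistentVolume whose claimRef already names the
// claim it should bind to, so the PV controller treats it as pre-bound.
func preBoundPV(ns string) *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-i-prebound"},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("5Gi"), // placeholder size
			},
			AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain, // matches "policy is Retain" later in the log
			StorageClassName:              "immediate-sc",                   // placeholder class name
			ClaimRef: &v1.ObjectReference{
				Namespace: ns,
				Name:      "pvc-i-pv-prebound",
			},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/pv-i-prebound"}, // placeholder source
			},
		},
	}
}
```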
I1009 21:15:21.150609  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (2.547423ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I1009 21:15:21.151282  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (2.740103ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.151314  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 58318
I1009 21:15:21.151338  110340 pv_controller.go:800] volume "pv-i-prebound" entered phase "Available"
I1009 21:15:21.151362  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 58318
I1009 21:15:21.151379  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound (uid: )", boundByController: false
I1009 21:15:21.151561  110340 pv_controller.go:508] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound
I1009 21:15:21.151644  110340 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Available
I1009 21:15:21.151701  110340 pv_controller.go:782] updating PersistentVolume[pv-i-prebound]: phase Available already set
I1009 21:15:21.151847  110340 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound", version 58319
I1009 21:15:21.151963  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:21.152137  110340 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound (uid: )", boundByController: false
I1009 21:15:21.152246  110340 pv_controller.go:933] binding volume "pv-i-prebound" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound"
I1009 21:15:21.152306  110340 pv_controller.go:831] updating PersistentVolume[pv-i-prebound]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound"
I1009 21:15:21.152385  110340 pv_controller.go:851] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I1009 21:15:21.154675  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (2.631355ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.155484  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.813756ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I1009 21:15:21.155773  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 58321
I1009 21:15:21.155817  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound (uid: 8ad666a0-d054-489c-9495-e45795d171cb)", boundByController: false
I1009 21:15:21.155874  110340 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound
I1009 21:15:21.155892  110340 pv_controller.go:557] synchronizing PersistentVolume[pv-i-prebound]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:21.155904  110340 pv_controller.go:608] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1009 21:15:21.156029  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 58321
I1009 21:15:21.156110  110340 pv_controller.go:864] updating PersistentVolume[pv-i-prebound]: bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound"
I1009 21:15:21.156150  110340 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1009 21:15:21.159280  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (3.292061ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.159566  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.178324ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I1009 21:15:21.159815  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 58323
I1009 21:15:21.159853  110340 pv_controller.go:800] volume "pv-i-prebound" entered phase "Bound"
I1009 21:15:21.159865  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1009 21:15:21.159879  110340 pv_controller.go:903] volume "pv-i-prebound" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound"
I1009 21:15:21.159895  110340 scheduling_queue.go:883] About to try and schedule pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned
I1009 21:15:21.159910  110340 scheduler.go:598] Attempting to schedule pod: volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned
E1009 21:15:21.160091  110340 factory.go:661] Error scheduling volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I1009 21:15:21.160114  110340 scheduler.go:746] Updating pod condition for volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
I1009 21:15:21.160142  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 58323
I1009 21:15:21.160168  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound (uid: 8ad666a0-d054-489c-9495-e45795d171cb)", boundByController: false
I1009 21:15:21.160177  110340 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound
I1009 21:15:21.160193  110340 pv_controller.go:557] synchronizing PersistentVolume[pv-i-prebound]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:21.160205  110340 pv_controller.go:608] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1009 21:15:21.162928  110340 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.783459ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43784]
I1009 21:15:21.163055  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-i-pv-prebound: (2.154062ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I1009 21:15:21.163282  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (2.383783ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43786]
I1009 21:15:21.163537  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound" with version 58324
I1009 21:15:21.163562  110340 pv_controller.go:914] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I1009 21:15:21.163573  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound] status: set phase Bound
I1009 21:15:21.165138  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned/status: (3.138053ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
E1009 21:15:21.165492  110340 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I1009 21:15:21.165905  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-i-pv-prebound/status: (2.110269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43784]
I1009 21:15:21.166115  110340 scheduling_queue.go:883] About to try and schedule pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned
I1009 21:15:21.166130  110340 scheduler.go:598] Attempting to schedule pod: volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned
I1009 21:15:21.166140  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound" with version 58327
I1009 21:15:21.166167  110340 pv_controller.go:744] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound" entered phase "Bound"
I1009 21:15:21.166184  110340 pv_controller.go:959] volume "pv-i-prebound" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound"
I1009 21:15:21.166207  110340 pv_controller.go:960] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound (uid: 8ad666a0-d054-489c-9495-e45795d171cb)", boundByController: false
I1009 21:15:21.166223  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1009 21:15:21.166264  110340 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision", version 58320
I1009 21:15:21.166278  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:21.166279  110340 scheduler_binder.go:659] All bound volumes for Pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned" match with Node "node-1"
I1009 21:15:21.166297  110340 scheduler_binder.go:686] No matching volumes for Pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned", PVC "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" on node "node-1"
I1009 21:15:21.166305  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:21.166310  110340 scheduler_binder.go:741] Provisioning for claims of pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I1009 21:15:21.166332  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Pending
I1009 21:15:21.166349  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: phase Pending already set
I1009 21:15:21.166358  110340 scheduler_binder.go:257] AssumePodVolumes for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned", node "node-1"
I1009 21:15:21.166366  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound" with version 58327
I1009 21:15:21.166374  110340 scheduler_assume_cache.go:323] Assumed v1.PersistentVolumeClaim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision", version 58320
I1009 21:15:21.166379  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1009 21:15:21.166400  110340 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound (uid: 8ad666a0-d054-489c-9495-e45795d171cb)", boundByController: false
I1009 21:15:21.166410  110340 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound]: claim is already correctly bound
I1009 21:15:21.166412  110340 scheduler_binder.go:332] BindPodVolumes for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned", node "node-1"
I1009 21:15:21.166420  110340 pv_controller.go:933] binding volume "pv-i-prebound" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound"
I1009 21:15:21.166447  110340 pv_controller.go:831] updating PersistentVolume[pv-i-prebound]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound"
I1009 21:15:21.166465  110340 pv_controller.go:843] updating PersistentVolume[pv-i-prebound]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound"
I1009 21:15:21.166476  110340 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1009 21:15:21.166484  110340 pv_controller.go:782] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I1009 21:15:21.166493  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1009 21:15:21.166510  110340 pv_controller.go:918] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I1009 21:15:21.166518  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound] status: set phase Bound
I1009 21:15:21.166534  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound] status: phase Bound already set
I1009 21:15:21.166546  110340 pv_controller.go:959] volume "pv-i-prebound" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound"
I1009 21:15:21.166566  110340 pv_controller.go:960] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound (uid: 8ad666a0-d054-489c-9495-e45795d171cb)", boundByController: false
I1009 21:15:21.166580  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1009 21:15:21.166612  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-canprovision", UID:"776bea0e-3000-46bb-abc4-af3612896b4d", APIVersion:"v1", ResourceVersion:"58320", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
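The WaitForFirstConsumer event above is emitted because pvc-canprovision uses a StorageClass with delayed binding: the PV controller defers provisioning until the scheduler has picked a node for a pod that consumes the claim, which is exactly what scheduler_binder.go does a few lines earlier. A sketch of creating such a class with client-go; the class name and the context-taking Create signature of newer client-go are assumptions (the log shows generated class names such as "wait-z4kb"):

```go
package sketch

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createWaitClass creates a StorageClass whose volumeBindingMode is
// WaitForFirstConsumer, so claims using it stay Pending (with the
// WaitForFirstConsumer event) until a pod that needs them is scheduled.
func createWaitClass(ctx context.Context, cs kubernetes.Interface) (*storagev1.StorageClass, error) {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-sc"}, // placeholder; the test uses generated names
		Provisioner:       "kubernetes.io/mock-provisioner",   // provisioner name taken from the log
		VolumeBindingMode: &mode,
	}
	return cs.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{})
}
```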
I1009 21:15:21.169638  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (2.944734ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.170194  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58330
I1009 21:15:21.170235  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:21.170259  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:21.170276  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: started
I1009 21:15:21.170294  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[776bea0e-3000-46bb-abc4-af3612896b4d]]
I1009 21:15:21.170346  110340 pv_controller.go:1367] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] started, class: "wait-z4kb"
I1009 21:15:21.171283  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (4.417896ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43782]
I1009 21:15:21.172961  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (2.343269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.173034  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58331
I1009 21:15:21.173057  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:21.173079  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:21.173086  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: started
I1009 21:15:21.173099  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[776bea0e-3000-46bb-abc4-af3612896b4d]]
I1009 21:15:21.173105  110340 pv_controller.go:1637] operation "provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[776bea0e-3000-46bb-abc4-af3612896b4d]" is already running, skipping
I1009 21:15:21.173278  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58331
I1009 21:15:21.174560  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-776bea0e-3000-46bb-abc4-af3612896b4d: (1.098673ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.174914  110340 pv_controller.go:1471] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" created
I1009 21:15:21.174947  110340 pv_controller.go:1488] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: trying to save volume pvc-776bea0e-3000-46bb-abc4-af3612896b4d
I1009 21:15:21.177171  110340 httplog.go:90] POST /api/v1/persistentvolumes: (1.974542ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.177701  110340 pv_controller.go:1496] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" saved
I1009 21:15:21.177745  110340 pv_controller_base.go:509] storeObjectUpdate: adding volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d", version 58332
I1009 21:15:21.177768  110340 pv_controller.go:1549] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" provisioned for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:21.178031  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-canprovision", UID:"776bea0e-3000-46bb-abc4-af3612896b4d", APIVersion:"v1", ResourceVersion:"58331", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-776bea0e-3000-46bb-abc4-af3612896b4d using kubernetes.io/mock-provisioner
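As the ProvisioningSucceeded event shows, the dynamically provisioned volume is named after the claim's UID ("pvc-776bea0e-...") and is created already pointing back at the claim, after which the controller binds the two and both reach phase Bound. A small sketch of observing that from a client, assuming a newer client-go with context-taking calls; the poll interval and timeout are arbitrary:

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForProvisionedPV polls a claim until it is Bound, then returns the
// PV it was bound to. For dynamically provisioned claims this is the
// "pvc-<claim UID>" volume recorded in spec.volumeName.
func waitForProvisionedPV(ctx context.Context, cs kubernetes.Interface, ns, claimName string) (*v1.PersistentVolume, error) {
	for i := 0; i < 30; i++ { // ~30s with a 1s poll
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, claimName, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		if pvc.Status.Phase == v1.ClaimBound && pvc.Spec.VolumeName != "" {
			return cs.CoreV1().PersistentVolumes().Get(ctx, pvc.Spec.VolumeName, metav1.GetOptions{})
		}
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("claim %s/%s not bound in time", ns, claimName)
}
```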
I1009 21:15:21.179009  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" with version 58332
I1009 21:15:21.179056  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 776bea0e-3000-46bb-abc4-af3612896b4d)", boundByController: true
I1009 21:15:21.179070  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:21.179088  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:21.179102  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:21.179135  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58331
I1009 21:15:21.179155  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:21.179185  110340 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" found: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 776bea0e-3000-46bb-abc4-af3612896b4d)", boundByController: true
I1009 21:15:21.179199  110340 pv_controller.go:933] binding volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:21.179213  110340 pv_controller.go:831] updating PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:21.179231  110340 pv_controller.go:843] updating PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:21.179242  110340 pv_controller.go:779] updating PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: set phase Bound
I1009 21:15:21.180577  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (2.436013ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.180980  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-776bea0e-3000-46bb-abc4-af3612896b4d/status: (1.482583ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43782]
I1009 21:15:21.181511  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" with version 58334
I1009 21:15:21.181535  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" with version 58334
I1009 21:15:21.181549  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 776bea0e-3000-46bb-abc4-af3612896b4d)", boundByController: true
I1009 21:15:21.181559  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:21.181621  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:21.181633  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:21.181562  110340 pv_controller.go:800] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" entered phase "Bound"
I1009 21:15:21.181657  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: binding to "pvc-776bea0e-3000-46bb-abc4-af3612896b4d"
I1009 21:15:21.181671  110340 pv_controller.go:903] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:21.184621  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (2.639281ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.184893  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58335
I1009 21:15:21.184929  110340 pv_controller.go:914] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: bound to "pvc-776bea0e-3000-46bb-abc4-af3612896b4d"
I1009 21:15:21.184941  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Bound
I1009 21:15:21.187355  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision/status: (2.20387ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.187968  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58336
I1009 21:15:21.188005  110340 pv_controller.go:744] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" entered phase "Bound"
I1009 21:15:21.188023  110340 pv_controller.go:959] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:21.188057  110340 pv_controller.go:960] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 776bea0e-3000-46bb-abc4-af3612896b4d)", boundByController: true
I1009 21:15:21.188150  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-776bea0e-3000-46bb-abc4-af3612896b4d", bindCompleted: true, boundByController: true
I1009 21:15:21.188383  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58336
I1009 21:15:21.188461  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Bound, bound to: "pvc-776bea0e-3000-46bb-abc4-af3612896b4d", bindCompleted: true, boundByController: true
I1009 21:15:21.188486  110340 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" found: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 776bea0e-3000-46bb-abc4-af3612896b4d)", boundByController: true
I1009 21:15:21.188499  110340 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: claim is already correctly bound
I1009 21:15:21.188510  110340 pv_controller.go:933] binding volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:21.188522  110340 pv_controller.go:831] updating PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:21.188542  110340 pv_controller.go:843] updating PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:21.188551  110340 pv_controller.go:779] updating PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: set phase Bound
I1009 21:15:21.188560  110340 pv_controller.go:782] updating PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: phase Bound already set
I1009 21:15:21.188570  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: binding to "pvc-776bea0e-3000-46bb-abc4-af3612896b4d"
I1009 21:15:21.188590  110340 pv_controller.go:918] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: already bound to "pvc-776bea0e-3000-46bb-abc4-af3612896b4d"
I1009 21:15:21.188597  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Bound
I1009 21:15:21.188610  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: phase Bound already set
I1009 21:15:21.188623  110340 pv_controller.go:959] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:21.188639  110340 pv_controller.go:960] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 776bea0e-3000-46bb-abc4-af3612896b4d)", boundByController: true
I1009 21:15:21.188649  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-776bea0e-3000-46bb-abc4-af3612896b4d", bindCompleted: true, boundByController: true
I1009 21:15:21.262885  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (1.950089ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.363235  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (2.199775ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.459164  110340 cache.go:669] Couldn't expire cache for pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned. Binding is still in progress.
I1009 21:15:21.462675  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (1.737432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.563296  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (2.343901ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.663098  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (2.143422ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.762944  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (2.028994ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.863349  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (2.251807ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:21.962756  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (1.704777ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.062689  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (1.578602ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.163541  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (2.521675ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.170196  110340 scheduler_binder.go:553] All PVCs for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned" are bound
I1009 21:15:22.170304  110340 factory.go:710] Attempting to bind pod-i-pv-prebound-w-provisioned to node-1
I1009 21:15:22.174904  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned/binding: (4.06945ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.175203  110340 scheduler.go:730] pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
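Once BindPodVolumes reports that all PVCs for the pod are bound, the scheduler binds the pod itself by POSTing the pods/binding subresource (the /pods/pod-i-pv-prebound-w-provisioned/binding call above). A functionally equivalent client-go sketch, assuming a newer client-go where Bind takes a context:

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPodToNode issues the same pods/binding subresource call the
// scheduler makes once the pod's volumes are bound.
func bindPodToNode(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	}
	return cs.CoreV1().Pods(ns).Bind(ctx, binding, metav1.CreateOptions{})
}
```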
I1009 21:15:22.178200  110340 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (2.655681ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.263247  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-pv-prebound-w-provisioned: (2.239113ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.265316  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-i-pv-prebound: (1.544723ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.266874  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (1.101736ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.268254  110340 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (961.815µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.274805  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (6.272712ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.281331  110340 pv_controller_base.go:265] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" deleted
I1009 21:15:22.281377  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" with version 58334
I1009 21:15:22.281406  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 776bea0e-3000-46bb-abc4-af3612896b4d)", boundByController: true
I1009 21:15:22.281415  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:22.282852  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (1.213022ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43782]
I1009 21:15:22.283088  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision not found
I1009 21:15:22.283136  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (7.85841ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.283135  110340 pv_controller.go:577] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" is released and reclaim policy "Delete" will be executed
I1009 21:15:22.283150  110340 pv_controller.go:779] updating PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: set phase Released
I1009 21:15:22.283474  110340 pv_controller_base.go:265] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound" deleted
I1009 21:15:22.286523  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-776bea0e-3000-46bb-abc4-af3612896b4d/status: (3.094145ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43782]
I1009 21:15:22.286818  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" with version 58431
I1009 21:15:22.286870  110340 pv_controller.go:800] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" entered phase "Released"
I1009 21:15:22.286884  110340 pv_controller.go:1024] reclaimVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: policy is Delete
I1009 21:15:22.286911  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-776bea0e-3000-46bb-abc4-af3612896b4d[4550ac03-fb8e-4ffe-ab76-95fe9796508f]]
I1009 21:15:22.286944  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 58323
I1009 21:15:22.286976  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound (uid: 8ad666a0-d054-489c-9495-e45795d171cb)", boundByController: false
I1009 21:15:22.286989  110340 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound
I1009 21:15:22.287010  110340 pv_controller.go:549] synchronizing PersistentVolume[pv-i-prebound]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound not found
I1009 21:15:22.287025  110340 pv_controller.go:577] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I1009 21:15:22.287036  110340 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Released
I1009 21:15:22.287161  110340 pv_controller.go:1148] deleteVolumeOperation [pvc-776bea0e-3000-46bb-abc4-af3612896b4d] started
I1009 21:15:22.289037  110340 store.go:231] deletion of /6f243518-070e-43d7-8ff6-ea97e6b7a363/persistentvolumes/pv-i-prebound failed because of a conflict, going to retry
I1009 21:15:22.289232  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (1.935594ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43782]
I1009 21:15:22.289302  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-776bea0e-3000-46bb-abc4-af3612896b4d: (1.668409ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.289419  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 58432
I1009 21:15:22.289439  110340 pv_controller.go:800] volume "pv-i-prebound" entered phase "Released"
I1009 21:15:22.289449  110340 pv_controller.go:1013] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I1009 21:15:22.289475  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" with version 58431
I1009 21:15:22.289501  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: phase: Released, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 776bea0e-3000-46bb-abc4-af3612896b4d)", boundByController: true
I1009 21:15:22.289514  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:22.289529  110340 pv_controller.go:1252] isVolumeReleased[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: volume is released
I1009 21:15:22.289537  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision not found
I1009 21:15:22.289542  110340 pv_controller.go:1287] doDeleteVolume [pvc-776bea0e-3000-46bb-abc4-af3612896b4d]
I1009 21:15:22.289545  110340 pv_controller.go:1024] reclaimVolume[pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: policy is Delete
I1009 21:15:22.289565  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-776bea0e-3000-46bb-abc4-af3612896b4d[4550ac03-fb8e-4ffe-ab76-95fe9796508f]]
I1009 21:15:22.289571  110340 pv_controller.go:1318] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" deleted
I1009 21:15:22.289582  110340 pv_controller.go:1195] deleteVolumeOperation [pvc-776bea0e-3000-46bb-abc4-af3612896b4d]: success
I1009 21:15:22.289573  110340 pv_controller.go:1637] operation "delete-pvc-776bea0e-3000-46bb-abc4-af3612896b4d[4550ac03-fb8e-4ffe-ab76-95fe9796508f]" is already running, skipping
I1009 21:15:22.289599  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 58432
I1009 21:15:22.289617  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Released, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound (uid: 8ad666a0-d054-489c-9495-e45795d171cb)", boundByController: false
I1009 21:15:22.289628  110340 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound
I1009 21:15:22.289648  110340 pv_controller.go:549] synchronizing PersistentVolume[pv-i-prebound]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound not found
I1009 21:15:22.289653  110340 pv_controller.go:1013] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
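The interleaved lines above show both reclaim paths during teardown: pvc-776bea0e-... has policy Delete, so a deleteVolumeOperation removes it, while pv-i-prebound has policy Retain and is only marked Released ("nothing to do"). The policy is just a field on the PV spec; a sketch of changing it on an existing volume, assuming a newer client-go with context-taking calls:

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setReclaimPolicy switches a PV between Retain and Delete. With Retain
// the controller leaves the volume in phase Released when its claim goes
// away; with Delete it schedules a deleteVolumeOperation, as in the log.
func setReclaimPolicy(ctx context.Context, cs kubernetes.Interface, pvName string, policy v1.PersistentVolumeReclaimPolicy) (*v1.PersistentVolume, error) {
	pv, err := cs.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	pv.Spec.PersistentVolumeReclaimPolicy = policy
	return cs.CoreV1().PersistentVolumes().Update(ctx, pv, metav1.UpdateOptions{})
}
```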
I1009 21:15:22.290973  110340 pv_controller_base.go:216] volume "pv-i-prebound" deleted
I1009 21:15:22.291012  110340 pv_controller_base.go:403] deletion of claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-i-pv-prebound" was already processed
I1009 21:15:22.293399  110340 pv_controller_base.go:216] volume "pvc-776bea0e-3000-46bb-abc4-af3612896b4d" deleted
I1009 21:15:22.293438  110340 pv_controller_base.go:403] deletion of claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" was already processed
I1009 21:15:22.294030  110340 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-776bea0e-3000-46bb-abc4-af3612896b4d: (3.873407ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.294065  110340 httplog.go:90] DELETE /api/v1/persistentvolumes: (10.321439ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I1009 21:15:22.304472  110340 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (10.003797ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.304742  110340 volume_binding_test.go:739] Running test wait one pv prebound, one provisioned
I1009 21:15:22.306699  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.687927ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.308349  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.217211ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.310034  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.24659ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.311816  110340 httplog.go:90] POST /api/v1/persistentvolumes: (1.484616ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.312394  110340 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-prebound", version 58441
I1009 21:15:22.312547  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound (uid: )", boundByController: false
I1009 21:15:22.312637  110340 pv_controller.go:508] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound
I1009 21:15:22.312743  110340 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Available
I1009 21:15:22.315512  110340 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound", version 58442
I1009 21:15:22.315542  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:22.315782  110340 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound (uid: )", boundByController: false
I1009 21:15:22.315798  110340 pv_controller.go:933] binding volume "pv-w-prebound" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound"
I1009 21:15:22.315812  110340 pv_controller.go:831] updating PersistentVolume[pv-w-prebound]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound"
I1009 21:15:22.315857  110340 pv_controller.go:851] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1009 21:15:22.316072  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (3.707317ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.318406  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.034565ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43782]
I1009 21:15:22.318583  110340 store.go:365] GuaranteedUpdate of /6f243518-070e-43d7-8ff6-ea97e6b7a363/persistentvolumes/pv-w-prebound failed because of a conflict, going to retry
I1009 21:15:22.319024  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 58444
I1009 21:15:22.319040  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.969388ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.319051  110340 pv_controller.go:864] updating PersistentVolume[pv-w-prebound]: bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound"
I1009 21:15:22.319064  110340 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1009 21:15:22.319282  110340 pv_controller.go:792] updating PersistentVolume[pv-w-prebound]: set phase Available failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1009 21:15:22.319299  110340 pv_controller_base.go:204] could not sync volume "pv-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1009 21:15:22.319325  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 58444
I1009 21:15:22.319353  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound (uid: 826a6d78-2850-448b-8273-bf4c9446eabd)", boundByController: false
I1009 21:15:22.319365  110340 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound
I1009 21:15:22.319384  110340 pv_controller.go:557] synchronizing PersistentVolume[pv-w-prebound]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:22.319400  110340 pv_controller.go:608] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1009 21:15:22.320420  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (3.763275ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43932]
I1009 21:15:22.321452  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.744767ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43782]
I1009 21:15:22.321692  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 58445
I1009 21:15:22.321714  110340 pv_controller.go:800] volume "pv-w-prebound" entered phase "Bound"
I1009 21:15:22.321728  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1009 21:15:22.321740  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 58445
I1009 21:15:22.321748  110340 pv_controller.go:903] volume "pv-w-prebound" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound"
I1009 21:15:22.321773  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound (uid: 826a6d78-2850-448b-8273-bf4c9446eabd)", boundByController: false
I1009 21:15:22.321789  110340 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound
I1009 21:15:22.321805  110340 pv_controller.go:557] synchronizing PersistentVolume[pv-w-prebound]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:22.321901  110340 pv_controller.go:608] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1009 21:15:22.324924  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-w-pv-prebound: (2.929018ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43782]
I1009 21:15:22.324961  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (3.822347ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.325432  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound" with version 58447
I1009 21:15:22.325653  110340 pv_controller.go:914] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I1009 21:15:22.325746  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound] status: set phase Bound
I1009 21:15:22.327043  110340 scheduling_queue.go:883] About to try and schedule pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-w-pv-prebound-w-provisioned
I1009 21:15:22.327066  110340 scheduler.go:598] Attempting to schedule pod: volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-w-pv-prebound-w-provisioned
I1009 21:15:22.327283  110340 scheduler_binder.go:659] All bound volumes for Pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-w-pv-prebound-w-provisioned" match with Node "node-1"
I1009 21:15:22.327403  110340 scheduler_binder.go:686] No matching volumes for Pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-w-pv-prebound-w-provisioned", PVC "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" on node "node-1"
I1009 21:15:22.327426  110340 scheduler_binder.go:741] Provisioning for claims of pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-w-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I1009 21:15:22.327494  110340 scheduler_binder.go:257] AssumePodVolumes for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-w-pv-prebound-w-provisioned", node "node-1"
I1009 21:15:22.327520  110340 scheduler_assume_cache.go:323] Assumed v1.PersistentVolumeClaim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision", version 58443
I1009 21:15:22.327622  110340 scheduler_binder.go:332] BindPodVolumes for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-w-pv-prebound-w-provisioned", node "node-1"
I1009 21:15:22.329097  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-w-pv-prebound/status: (2.968484ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.329341  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound" with version 58449
I1009 21:15:22.329379  110340 pv_controller.go:744] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound" entered phase "Bound"
I1009 21:15:22.329399  110340 pv_controller.go:959] volume "pv-w-prebound" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound"
I1009 21:15:22.329424  110340 pv_controller.go:960] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound (uid: 826a6d78-2850-448b-8273-bf4c9446eabd)", boundByController: false
I1009 21:15:22.329447  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1009 21:15:22.329488  110340 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision", version 58443
I1009 21:15:22.329514  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:22.329544  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:22.329688  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Pending
I1009 21:15:22.329920  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: phase Pending already set
I1009 21:15:22.330050  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound" with version 58449
I1009 21:15:22.330163  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1009 21:15:22.330341  110340 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound (uid: 826a6d78-2850-448b-8273-bf4c9446eabd)", boundByController: false
I1009 21:15:22.330441  110340 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound]: claim is already correctly bound
I1009 21:15:22.330526  110340 pv_controller.go:933] binding volume "pv-w-prebound" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound"
I1009 21:15:22.330645  110340 pv_controller.go:831] updating PersistentVolume[pv-w-prebound]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound"
I1009 21:15:22.331041  110340 pv_controller.go:843] updating PersistentVolume[pv-w-prebound]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound"
I1009 21:15:22.331426  110340 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1009 21:15:22.331514  110340 pv_controller.go:782] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I1009 21:15:22.329790  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-canprovision", UID:"38bea0ce-535c-4332-96f4-e0cfa5e0bd85", APIVersion:"v1", ResourceVersion:"58443", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1009 21:15:22.330963  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (3.021014ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43930]
I1009 21:15:22.331584  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1009 21:15:22.331811  110340 pv_controller.go:918] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I1009 21:15:22.331824  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound] status: set phase Bound
I1009 21:15:22.331926  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound] status: phase Bound already set
I1009 21:15:22.331942  110340 pv_controller.go:959] volume "pv-w-prebound" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound"
I1009 21:15:22.331971  110340 pv_controller.go:960] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound (uid: 826a6d78-2850-448b-8273-bf4c9446eabd)", boundByController: false
I1009 21:15:22.331988  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1009 21:15:22.332022  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58450
I1009 21:15:22.332036  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:22.332066  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:22.332076  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: started
I1009 21:15:22.332092  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[38bea0ce-535c-4332-96f4-e0cfa5e0bd85]]
I1009 21:15:22.332147  110340 pv_controller.go:1367] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] started, class: "wait-cztn"
I1009 21:15:22.334008  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (2.255783ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.334536  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (1.899629ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43930]
I1009 21:15:22.334762  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58452
I1009 21:15:22.334782  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:22.334805  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: no volume found
I1009 21:15:22.334813  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: started
I1009 21:15:22.334826  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[38bea0ce-535c-4332-96f4-e0cfa5e0bd85]]
I1009 21:15:22.334858  110340 pv_controller.go:1637] operation "provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision[38bea0ce-535c-4332-96f4-e0cfa5e0bd85]" is already running, skipping
I1009 21:15:22.335106  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58452
I1009 21:15:22.336292  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85: (923.523µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.336557  110340 pv_controller.go:1471] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" created
I1009 21:15:22.336592  110340 pv_controller.go:1488] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: trying to save volume pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85
I1009 21:15:22.339306  110340 pv_controller_base.go:509] storeObjectUpdate: adding volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85", version 58453
I1009 21:15:22.339354  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 38bea0ce-535c-4332-96f4-e0cfa5e0bd85)", boundByController: true
I1009 21:15:22.339424  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:22.339481  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:22.339506  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:22.339586  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58452
I1009 21:15:22.339717  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:22.339813  110340 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" found: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 38bea0ce-535c-4332-96f4-e0cfa5e0bd85)", boundByController: true
I1009 21:15:22.339914  110340 pv_controller.go:933] binding volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:22.339958  110340 pv_controller.go:831] updating PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:22.340079  110340 pv_controller.go:843] updating PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:22.340140  110340 pv_controller.go:779] updating PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: set phase Bound
I1009 21:15:22.340852  110340 httplog.go:90] POST /api/v1/persistentvolumes: (3.964425ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.341221  110340 pv_controller.go:1496] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" saved
I1009 21:15:22.341362  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" with version 58453
I1009 21:15:22.341495  110340 pv_controller.go:1549] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" provisioned for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:22.341892  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-canprovision", UID:"38bea0ce-535c-4332-96f4-e0cfa5e0bd85", APIVersion:"v1", ResourceVersion:"58452", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85 using kubernetes.io/mock-provisioner
I1009 21:15:22.342864  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" with version 58454
I1009 21:15:22.342941  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 38bea0ce-535c-4332-96f4-e0cfa5e0bd85)", boundByController: true
I1009 21:15:22.342950  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:22.342964  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:22.342975  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:22.343049  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85/status: (2.525916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43934]
I1009 21:15:22.343441  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" with version 58454
I1009 21:15:22.343475  110340 pv_controller.go:800] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" entered phase "Bound"
I1009 21:15:22.343490  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: binding to "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85"
I1009 21:15:22.343511  110340 pv_controller.go:903] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:22.345123  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (3.075939ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.347869  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (2.401734ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.348098  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58456
I1009 21:15:22.348171  110340 pv_controller.go:914] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: bound to "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85"
I1009 21:15:22.348184  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Bound
I1009 21:15:22.350788  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision/status: (2.152275ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.351356  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58457
I1009 21:15:22.351396  110340 pv_controller.go:744] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" entered phase "Bound"
I1009 21:15:22.351416  110340 pv_controller.go:959] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:22.351441  110340 pv_controller.go:960] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 38bea0ce-535c-4332-96f4-e0cfa5e0bd85)", boundByController: true
I1009 21:15:22.351460  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85", bindCompleted: true, boundByController: true
I1009 21:15:22.351516  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" with version 58457
I1009 21:15:22.351536  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: phase: Bound, bound to: "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85", bindCompleted: true, boundByController: true
I1009 21:15:22.351555  110340 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" found: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 38bea0ce-535c-4332-96f4-e0cfa5e0bd85)", boundByController: true
I1009 21:15:22.351566  110340 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: claim is already correctly bound
I1009 21:15:22.351575  110340 pv_controller.go:933] binding volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:22.351590  110340 pv_controller.go:831] updating PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:22.351606  110340 pv_controller.go:843] updating PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:22.351615  110340 pv_controller.go:779] updating PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: set phase Bound
I1009 21:15:22.351623  110340 pv_controller.go:782] updating PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: phase Bound already set
I1009 21:15:22.351632  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: binding to "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85"
I1009 21:15:22.351652  110340 pv_controller.go:918] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision]: already bound to "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85"
I1009 21:15:22.351661  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: set phase Bound
I1009 21:15:22.351678  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision] status: phase Bound already set
I1009 21:15:22.351689  110340 pv_controller.go:959] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision"
I1009 21:15:22.351705  110340 pv_controller.go:960] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 38bea0ce-535c-4332-96f4-e0cfa5e0bd85)", boundByController: true
I1009 21:15:22.351719  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85", bindCompleted: true, boundByController: true
I1009 21:15:22.427443  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (1.640896ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.462645  110340 cache.go:669] Couldn't expire cache for pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-w-pv-prebound-w-provisioned. Binding is still in progress.
I1009 21:15:22.528295  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (2.632144ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.627346  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (1.681388ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.727711  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (1.92842ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.828058  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (2.309266ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:22.927544  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (1.898888ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.027489  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (1.728081ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.127368  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (1.63844ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.227665  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (1.880051ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.328562  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (2.794426ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.332747  110340 scheduler_binder.go:553] All PVCs for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-w-pv-prebound-w-provisioned" are bound
I1009 21:15:23.332807  110340 factory.go:710] Attempting to bind pod-w-pv-prebound-w-provisioned to node-1
I1009 21:15:23.335368  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned/binding: (2.187511ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.335564  110340 scheduler.go:730] pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-w-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1009 21:15:23.337783  110340 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.896757ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
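The scheduler_binder lines above show pod-w-pv-prebound-w-provisioned being treated as having one already-bound PVC (pvc-w-pv-prebound) and one PVC that still needs provisioning (pvc-canprovision) before the volumes are assumed and the pod is bound to node-1. A rough sketch of a pod shaped like that follows; the container image, volume names and mount paths are placeholders, not taken from the test fixture.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod that mounts two claims: one pre-bound, one provisioned on demand.
	// The scheduler's volume binder only binds the pod to a node once both
	// PVCs can be satisfied there.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-w-pv-prebound-w-provisioned"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "k8s.gcr.io/pause", // placeholder image
				VolumeMounts: []corev1.VolumeMount{
					{Name: "prebound", MountPath: "/mnt/prebound"},
					{Name: "provisioned", MountPath: "/mnt/provisioned"},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "prebound", VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-w-pv-prebound"},
				}},
				{Name: "provisioned", VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-canprovision"},
				}},
			},
		},
	}
	fmt.Println(pod.Name, "references", len(pod.Spec.Volumes), "claims")
}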
I1009 21:15:23.428780  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-w-pv-prebound-w-provisioned: (2.967927ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.431032  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-w-pv-prebound: (1.546273ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.433581  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (2.037089ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.437267  110340 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (3.136138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.450079  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (11.972499ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.456312  110340 pv_controller_base.go:265] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" deleted
I1009 21:15:23.456630  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" with version 58454
I1009 21:15:23.456799  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 38bea0ce-535c-4332-96f4-e0cfa5e0bd85)", boundByController: true
I1009 21:15:23.457184  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:23.458728  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-canprovision: (1.104392ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43934]
I1009 21:15:23.459049  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision not found
I1009 21:15:23.459067  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (8.184586ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.459073  110340 pv_controller.go:577] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" is released and reclaim policy "Delete" will be executed
I1009 21:15:23.459089  110340 pv_controller.go:779] updating PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: set phase Released
I1009 21:15:23.459153  110340 pv_controller_base.go:265] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound" deleted
I1009 21:15:23.461935  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85/status: (2.535874ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43934]
I1009 21:15:23.462404  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" with version 58547
I1009 21:15:23.462429  110340 pv_controller.go:800] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" entered phase "Released"
I1009 21:15:23.462438  110340 pv_controller.go:1024] reclaimVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: policy is Delete
I1009 21:15:23.462456  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85[dfb06956-959d-4745-9e3e-252d529cef4c]]
I1009 21:15:23.462482  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 58445
I1009 21:15:23.462506  110340 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound (uid: 826a6d78-2850-448b-8273-bf4c9446eabd)", boundByController: false
I1009 21:15:23.462518  110340 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound
I1009 21:15:23.462542  110340 pv_controller.go:549] synchronizing PersistentVolume[pv-w-prebound]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound not found
I1009 21:15:23.462555  110340 pv_controller.go:577] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I1009 21:15:23.462563  110340 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Released
I1009 21:15:23.462597  110340 pv_controller.go:1148] deleteVolumeOperation [pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85] started
I1009 21:15:23.465364  110340 store.go:365] GuaranteedUpdate of /6f243518-070e-43d7-8ff6-ea97e6b7a363/persistentvolumes/pv-w-prebound failed because of a conflict, going to retry
I1009 21:15:23.465559  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.670793ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43934]
I1009 21:15:23.465820  110340 pv_controller.go:792] updating PersistentVolume[pv-w-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": StorageError: invalid object, Code: 4, Key: /6f243518-070e-43d7-8ff6-ea97e6b7a363/persistentvolumes/pv-w-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: fab0ddf6-cf82-4686-b379-7ed81280cb4c, UID in object meta: 
I1009 21:15:23.465868  110340 pv_controller_base.go:204] could not sync volume "pv-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": StorageError: invalid object, Code: 4, Key: /6f243518-070e-43d7-8ff6-ea97e6b7a363/persistentvolumes/pv-w-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: fab0ddf6-cf82-4686-b379-7ed81280cb4c, UID in object meta: 
I1009 21:15:23.465912  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" with version 58547
I1009 21:15:23.465950  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: phase: Released, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision (uid: 38bea0ce-535c-4332-96f4-e0cfa5e0bd85)", boundByController: true
I1009 21:15:23.465965  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision
I1009 21:15:23.465986  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision not found
I1009 21:15:23.465996  110340 pv_controller.go:1024] reclaimVolume[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: policy is Delete
I1009 21:15:23.466020  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85[dfb06956-959d-4745-9e3e-252d529cef4c]]
I1009 21:15:23.466029  110340 pv_controller.go:1637] operation "delete-pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85[dfb06956-959d-4745-9e3e-252d529cef4c]" is already running, skipping
I1009 21:15:23.466039  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85: (2.807075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
I1009 21:15:23.466053  110340 pv_controller_base.go:216] volume "pv-w-prebound" deleted
I1009 21:15:23.466153  110340 pv_controller_base.go:403] deletion of claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-pv-prebound" was already processed
I1009 21:15:23.466250  110340 pv_controller.go:1252] isVolumeReleased[pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: volume is released
I1009 21:15:23.466262  110340 pv_controller.go:1287] doDeleteVolume [pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]
I1009 21:15:23.466287  110340 pv_controller.go:1318] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" deleted
I1009 21:15:23.466297  110340 pv_controller.go:1195] deleteVolumeOperation [pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85]: success
I1009 21:15:23.468188  110340 pv_controller_base.go:216] volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" deleted
I1009 21:15:23.468236  110340 pv_controller_base.go:403] deletion of claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-canprovision" was already processed
I1009 21:15:23.468586  110340 httplog.go:90] DELETE /api/v1/persistentvolumes: (9.206412ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I1009 21:15:23.470412  110340 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85: (3.512544ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
I1009 21:15:23.470593  110340 pv_controller.go:1202] failed to delete volume "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" from database: persistentvolumes "pvc-38bea0ce-535c-4332-96f4-e0cfa5e0bd85" not found
I1009 21:15:23.478599  110340 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (9.62382ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
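In the case that just finished, pvc-canprovision had no matching PV, so the controller first emitted 'WaitForFirstConsumer' and only provisioned once the scheduler picked node-1, logging 'Successfully provisioned volume ... using kubernetes.io/mock-provisioner'; the resulting PV carried a Delete reclaim policy and was removed with its claim during cleanup. Below is a minimal sketch of that StorageClass/PVC pairing, assuming a k8s.io/api release contemporary with this run (pre-1.29, where PersistentVolumeClaimSpec.Resources is still a core/v1.ResourceRequirements); the class name and request size are placeholders (the run generates names like "wait-cztn").

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Delayed binding: the PV controller emits WaitForFirstConsumer and does
	// nothing until the scheduler has chosen a node for a pod using the claim.
	waitMode := storagev1.VolumeBindingWaitForFirstConsumer
	deletePolicy := corev1.PersistentVolumeReclaimDelete
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-sc"}, // placeholder name
		Provisioner:       "kubernetes.io/mock-provisioner",
		VolumeBindingMode: &waitMode,
		ReclaimPolicy:     &deletePolicy, // provisioned PVs are deleted with their claim
	}

	// A claim with no pre-existing PV; it is provisioned once a pod lands.
	scName := sc.Name
	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-canprovision"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &scName,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}
	fmt.Println(pvc.Name, "uses class", *pvc.Spec.StorageClassName)
}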
I1009 21:15:23.478935  110340 volume_binding_test.go:739] Running test immediate provisioned by controller
I1009 21:15:23.481098  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.890102ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
I1009 21:15:23.483471  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.902028ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
I1009 21:15:23.485635  110340 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.48772ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
I1009 21:15:23.487727  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (1.521196ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
I1009 21:15:23.488349  110340 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned", version 58562
I1009 21:15:23.488385  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:23.488410  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: no volume found
I1009 21:15:23.488419  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: started
I1009 21:15:23.488433  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned[b00c2de0-9076-48cb-b5b3-8a430a594f67]]
I1009 21:15:23.488566  110340 pv_controller.go:1367] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned] started, class: "immediate-btnp"
I1009 21:15:23.490951  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" with version 58563
I1009 21:15:23.490987  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:23.491013  110340 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: no volume found
I1009 21:15:23.491022  110340 pv_controller.go:1328] provisionClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: started
I1009 21:15:23.491038  110340 pv_controller.go:1626] scheduleOperation[provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned[b00c2de0-9076-48cb-b5b3-8a430a594f67]]
I1009 21:15:23.491046  110340 pv_controller.go:1637] operation "provision-volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned[b00c2de0-9076-48cb-b5b3-8a430a594f67]" is already running, skipping
I1009 21:15:23.491310  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-controller-provisioned: (2.377491ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43934]
I1009 21:15:23.491346  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (2.615651ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
I1009 21:15:23.491732  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" with version 58563
I1009 21:15:23.492163  110340 scheduling_queue.go:883] About to try and schedule pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound
I1009 21:15:23.492189  110340 scheduler.go:598] Attempting to schedule pod: volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound
E1009 21:15:23.492371  110340 factory.go:661] Error scheduling volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound: pod has unbound immediate PersistentVolumeClaims; retrying
I1009 21:15:23.492406  110340 scheduler.go:746] Updating pod condition for volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound to (PodScheduled==False, Reason=Unschedulable)
I1009 21:15:23.493670  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67: (1.673864ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43934]
I1009 21:15:23.493952  110340 pv_controller.go:1471] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" created
I1009 21:15:23.493976  110340 pv_controller.go:1488] provisionClaimOperation [volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: trying to save volume pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67
I1009 21:15:23.494685  110340 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.614271ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:23.495157  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (2.184216ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44106]
I1009 21:15:23.495157  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound/status: (2.542299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
E1009 21:15:23.495479  110340 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I1009 21:15:23.496765  110340 httplog.go:90] POST /api/v1/persistentvolumes: (2.581098ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43934]
I1009 21:15:23.497052  110340 pv_controller_base.go:509] storeObjectUpdate: adding volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67", version 58568
I1009 21:15:23.497126  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned (uid: b00c2de0-9076-48cb-b5b3-8a430a594f67)", boundByController: true
I1009 21:15:23.497125  110340 pv_controller.go:1496] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" saved
I1009 21:15:23.497141  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned
I1009 21:15:23.497154  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" with version 58568
I1009 21:15:23.497165  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:23.497177  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:23.497183  110340 pv_controller.go:1549] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" provisioned for claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned"
I1009 21:15:23.497217  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" with version 58563
I1009 21:15:23.497234  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:23.497266  110340 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" found: phase: Pending, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned (uid: b00c2de0-9076-48cb-b5b3-8a430a594f67)", boundByController: true
I1009 21:15:23.497280  110340 pv_controller.go:933] binding volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned"
I1009 21:15:23.497294  110340 pv_controller.go:831] updating PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned"
I1009 21:15:23.497322  110340 pv_controller.go:843] updating PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned"
I1009 21:15:23.497333  110340 pv_controller.go:779] updating PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: set phase Bound
I1009 21:15:23.497404  110340 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f", Name:"pvc-controller-provisioned", UID:"b00c2de0-9076-48cb-b5b3-8a430a594f67", APIVersion:"v1", ResourceVersion:"58563", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67 using kubernetes.io/mock-provisioner
I1009 21:15:23.499271  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (1.822241ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
I1009 21:15:23.500144  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67/status: (2.541517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:23.500796  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" with version 58570
I1009 21:15:23.500896  110340 pv_controller.go:800] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" entered phase "Bound"
I1009 21:15:23.500921  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: binding to "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67"
I1009 21:15:23.500943  110340 pv_controller.go:903] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned"
I1009 21:15:23.500959  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" with version 58570
I1009 21:15:23.501001  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned (uid: b00c2de0-9076-48cb-b5b3-8a430a594f67)", boundByController: true
I1009 21:15:23.501115  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned
I1009 21:15:23.501451  110340 pv_controller.go:557] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1009 21:15:23.501563  110340 pv_controller.go:605] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: volume not bound yet, waiting for syncClaim to fix it
I1009 21:15:23.504430  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-controller-provisioned: (2.800837ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:23.504665  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" with version 58572
I1009 21:15:23.504728  110340 pv_controller.go:914] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: bound to "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67"
I1009 21:15:23.504811  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned] status: set phase Bound
I1009 21:15:23.510779  110340 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-controller-provisioned/status: (5.651481ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:23.511036  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" with version 58573
I1009 21:15:23.511061  110340 pv_controller.go:744] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" entered phase "Bound"
I1009 21:15:23.511076  110340 pv_controller.go:959] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned"
I1009 21:15:23.511093  110340 pv_controller.go:960] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned (uid: b00c2de0-9076-48cb-b5b3-8a430a594f67)", boundByController: true
I1009 21:15:23.511105  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67", bindCompleted: true, boundByController: true
I1009 21:15:23.511193  110340 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" with version 58573
I1009 21:15:23.511208  110340 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: phase: Bound, bound to: "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67", bindCompleted: true, boundByController: true
I1009 21:15:23.511221  110340 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" found: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned (uid: b00c2de0-9076-48cb-b5b3-8a430a594f67)", boundByController: true
I1009 21:15:23.511229  110340 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: claim is already correctly bound
I1009 21:15:23.511235  110340 pv_controller.go:933] binding volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned"
I1009 21:15:23.511243  110340 pv_controller.go:831] updating PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: binding to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned"
I1009 21:15:23.511262  110340 pv_controller.go:843] updating PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: already bound to "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned"
I1009 21:15:23.511270  110340 pv_controller.go:779] updating PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: set phase Bound
I1009 21:15:23.511276  110340 pv_controller.go:782] updating PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: phase Bound already set
I1009 21:15:23.511282  110340 pv_controller.go:871] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: binding to "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67"
I1009 21:15:23.511295  110340 pv_controller.go:918] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned]: already bound to "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67"
I1009 21:15:23.511302  110340 pv_controller.go:685] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned] status: set phase Bound
I1009 21:15:23.511315  110340 pv_controller.go:730] updating PersistentVolumeClaim[volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned] status: phase Bound already set
I1009 21:15:23.511323  110340 pv_controller.go:959] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" bound to claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned"
I1009 21:15:23.511335  110340 pv_controller.go:960] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" status after binding: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned (uid: b00c2de0-9076-48cb-b5b3-8a430a594f67)", boundByController: true
I1009 21:15:23.511352  110340 pv_controller.go:961] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67", bindCompleted: true, boundByController: true
I1009 21:15:23.597889  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (2.934609ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:23.693757  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.697822ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:23.794172  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.904276ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:23.894132  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (2.049694ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:23.994101  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.991972ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:24.094372  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (2.210633ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:24.195076  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (2.985055ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:24.295417  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (3.313478ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:24.394079  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.988713ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:24.493799  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.653833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:24.594095  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.97563ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:24.695209  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (3.070869ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:24.794055  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.896319ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:24.894055  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.91891ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:24.994243  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (2.195819ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.094774  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (2.617999ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.194119  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.960147ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.294190  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.787383ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.394893  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (2.748361ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.462858  110340 scheduling_queue.go:883] About to try and schedule pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound
I1009 21:15:25.462897  110340 scheduler.go:598] Attempting to schedule pod: volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound
I1009 21:15:25.463089  110340 scheduler_binder.go:659] All bound volumes for Pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound" match with Node "node-1"
I1009 21:15:25.463173  110340 scheduler_binder.go:257] AssumePodVolumes for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound", node "node-1"
I1009 21:15:25.463195  110340 scheduler_binder.go:267] AssumePodVolumes for pod "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound", node "node-1": all PVCs bound and nothing to do
I1009 21:15:25.463250  110340 factory.go:710] Attempting to bind pod-i-unbound to node-1
I1009 21:15:25.464277  110340 cache.go:669] Couldn't expire cache for pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound. Binding is still in progress.
I1009 21:15:25.469128  110340 httplog.go:90] POST /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound/binding: (5.448163ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.469397  110340 scheduler.go:730] pod volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-i-unbound is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1009 21:15:25.472737  110340 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/events: (2.956707ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.494049  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods/pod-i-unbound: (1.845111ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.496647  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-controller-provisioned: (1.594425ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.503929  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (6.58769ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.510859  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (6.403886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.510940  110340 pv_controller_base.go:265] claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" deleted
I1009 21:15:25.510976  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" with version 58570
I1009 21:15:25.511006  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: phase: Bound, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned (uid: b00c2de0-9076-48cb-b5b3-8a430a594f67)", boundByController: true
I1009 21:15:25.511020  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned
I1009 21:15:25.512450  110340 httplog.go:90] GET /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims/pvc-controller-provisioned: (1.190776ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.512688  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned not found
I1009 21:15:25.512723  110340 pv_controller.go:577] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" is released and reclaim policy "Delete" will be executed
I1009 21:15:25.512738  110340 pv_controller.go:779] updating PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: set phase Released
I1009 21:15:25.517107  110340 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67/status: (3.597007ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.517604  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" with version 58675
I1009 21:15:25.517652  110340 pv_controller.go:800] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" entered phase "Released"
I1009 21:15:25.517681  110340 pv_controller.go:1024] reclaimVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: policy is Delete
I1009 21:15:25.517710  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67[6da4b422-f3ee-4608-a1ef-b4021bb04920]]
I1009 21:15:25.517756  110340 pv_controller_base.go:537] storeObjectUpdate updating volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" with version 58675
I1009 21:15:25.517783  110340 pv_controller.go:491] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: phase: Released, bound to: "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned (uid: b00c2de0-9076-48cb-b5b3-8a430a594f67)", boundByController: true
I1009 21:15:25.517801  110340 pv_controller.go:516] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: volume is bound to claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned
I1009 21:15:25.517808  110340 pv_controller.go:1148] deleteVolumeOperation [pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67] started
I1009 21:15:25.517823  110340 pv_controller.go:549] synchronizing PersistentVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: claim volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned not found
I1009 21:15:25.517846  110340 pv_controller.go:1024] reclaimVolume[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: policy is Delete
I1009 21:15:25.517856  110340 pv_controller.go:1626] scheduleOperation[delete-pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67[6da4b422-f3ee-4608-a1ef-b4021bb04920]]
I1009 21:15:25.517862  110340 pv_controller.go:1637] operation "delete-pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67[6da4b422-f3ee-4608-a1ef-b4021bb04920]" is already running, skipping
I1009 21:15:25.520128  110340 httplog.go:90] GET /api/v1/persistentvolumes/pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67: (1.203927ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.520326  110340 httplog.go:90] DELETE /api/v1/persistentvolumes: (8.997544ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
I1009 21:15:25.520390  110340 pv_controller_base.go:216] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" deleted
I1009 21:15:25.520397  110340 pv_controller.go:1252] isVolumeReleased[pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: volume is released
I1009 21:15:25.520413  110340 pv_controller.go:1287] doDeleteVolume [pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]
I1009 21:15:25.520442  110340 pv_controller_base.go:403] deletion of claim "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-controller-provisioned" was already processed
I1009 21:15:25.520444  110340 pv_controller.go:1318] volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" deleted
I1009 21:15:25.520459  110340 pv_controller.go:1195] deleteVolumeOperation [pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67]: success
I1009 21:15:25.522043  110340 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67: (1.413617ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44104]
I1009 21:15:25.522354  110340 pv_controller.go:1202] failed to delete volume "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" from database: persistentvolumes "pvc-b00c2de0-9076-48cb-b5b3-8a430a594f67" not found
I1009 21:15:25.534009  110340 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (13.02519ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.534293  110340 volume_binding_test.go:920] test cluster "volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f" start to tear down
I1009 21:15:25.536897  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pods: (1.899628ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.539154  110340 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/persistentvolumeclaims: (1.743196ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.540748  110340 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.225875ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.542532  110340 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (1.240918ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.543242  110340 pv_controller_base.go:305] Shutting down persistent volume controller
I1009 21:15:25.543263  110340 pv_controller_base.go:416] claim worker queue shutting down
I1009 21:15:25.543923  110340 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=57853&timeout=9m37s&timeoutSeconds=577&watch=true: (7.081862464s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43562]
I1009 21:15:25.544022  110340 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=57853&timeout=9m34s&timeoutSeconds=574&watch=true: (7.081007058s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43552]
I1009 21:15:25.544028  110340 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=58102&timeout=7m44s&timeoutSeconds=464&watch=true: (7.079897984s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43558]
I1009 21:15:25.544073  110340 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=57853&timeout=5m34s&timeoutSeconds=334&watch=true: (7.082385908s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43354]
I1009 21:15:25.544089  110340 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=57853&timeout=5m38s&timeoutSeconds=338&watch=true: (7.081526594s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43550]
I1009 21:15:25.544104  110340 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=57853&timeout=6m37s&timeoutSeconds=397&watch=true: (7.08053199s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43560]
I1009 21:15:25.544173  110340 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=57853&timeout=8m50s&timeoutSeconds=530&watch=true: (7.080750019s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43554]
I1009 21:15:25.544208  110340 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=57853&timeout=9m5s&timeoutSeconds=545&watch=true: (6.981044085s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I1009 21:15:25.544225  110340 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=57853&timeout=8m59s&timeoutSeconds=539&watch=true: (7.081324076s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43548]
I1009 21:15:25.544231  110340 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=57853&timeout=5m50s&timeoutSeconds=350&watch=true: (6.980914504s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43584]
I1009 21:15:25.544210  110340 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=57853&timeout=9m34s&timeoutSeconds=574&watch=true: (6.981675616s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43578]
I1009 21:15:25.544282  110340 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=57853&timeout=6m18s&timeoutSeconds=378&watch=true: (6.981626247s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43576]
I1009 21:15:25.544341  110340 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=57853&timeout=7m36s&timeoutSeconds=456&watch=true: (6.981181217s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43574]
I1009 21:15:25.544345  110340 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=57853&timeout=6m28s&timeoutSeconds=388&watch=true: (7.082580268s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43356]
I1009 21:15:25.544359  110340 pv_controller_base.go:359] volume worker queue shutting down
I1009 21:15:25.544386  110340 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=57853&timeout=5m21s&timeoutSeconds=321&watch=true: (7.079958571s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43556]
I1009 21:15:25.544998  110340 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=57853&timeout=5m10s&timeoutSeconds=310&watch=true: (7.082944219s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43564]
I1009 21:15:25.550429  110340 httplog.go:90] DELETE /api/v1/nodes: (7.418937ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.550721  110340 controller.go:185] Shutting down kubernetes service endpoint reconciler
I1009 21:15:25.553076  110340 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.955304ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
I1009 21:15:25.558755  110340 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.947229ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44108]
--- FAIL: TestVolumeProvision (10.68s)
    volume_binding_test.go:1137: Provisioning annotation on PVC volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind did not behave as expected: PVC volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pvc-w-canbind not expected to be provisioned, but found selected-node annotation
    volume_binding_test.go:1179: PV pv-w-canbind phase not Bound, got Available

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191009-210414.xml
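
The two assertions above exercise the delayed-binding expectations in volume_binding_test.go: pvc-w-canbind is not supposed to be dynamically provisioned (so it should carry no scheduler "selected-node" annotation), and pv-w-canbind is supposed to end up in phase Bound rather than Available. The commands below are a minimal, hedged sketch of how one might inspect that state by hand against a live cluster; they are not part of the test, and they assume kubectl access to the integration namespace and that the annotation being checked is the scheduler's volume.kubernetes.io/selected-node key.

    # Hedged sketch: inspect the state the two failed assertions check.
    NS=volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f
    # Expect no selected-node annotation on the PVC that should not be provisioned
    # (annotation key assumed: volume.kubernetes.io/selected-node).
    kubectl -n "$NS" get pvc pvc-w-canbind -o yaml | grep -i selected-node
    # Expect "Bound"; the failing run observed "Available".
    kubectl get pv pv-w-canbind -o jsonpath='{.status.phase}'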

Find volume-schedulingd5e6d881-e156-44fa-afb0-d8f9dc8f2b7f/pod-pvc-canprovision mentions in log files | View test history on testgrid



Error lines from build-log.txt

... skipping 903 lines ...
W1009 20:59:12.132] I1009 20:59:11.830122   52867 shared_informer.go:197] Waiting for caches to sync for stateful set
W1009 20:59:12.133] I1009 20:59:11.830941   52867 controllermanager.go:534] Started "cronjob"
W1009 20:59:12.133] I1009 20:59:11.831184   52867 cronjob_controller.go:96] Starting CronJob Manager
W1009 20:59:12.133] I1009 20:59:11.831997   52867 core.go:212] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W1009 20:59:12.133] W1009 20:59:11.832252   52867 controllermanager.go:526] Skipping "route"
W1009 20:59:12.133] I1009 20:59:11.832990   52867 node_lifecycle_controller.go:77] Sending events to api server
W1009 20:59:12.133] E1009 20:59:11.833339   52867 core.go:202] failed to start cloud node lifecycle controller: no cloud provider provided
W1009 20:59:12.133] W1009 20:59:11.833736   52867 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W1009 20:59:12.133] I1009 20:59:11.834808   52867 controllermanager.go:534] Started "podgc"
W1009 20:59:12.134] I1009 20:59:11.835079   52867 gc_controller.go:75] Starting GC controller
W1009 20:59:12.134] I1009 20:59:11.835447   52867 shared_informer.go:197] Waiting for caches to sync for GC
W1009 20:59:12.134] I1009 20:59:11.839463   52867 controllermanager.go:534] Started "job"
W1009 20:59:12.134] I1009 20:59:11.839855   52867 job_controller.go:143] Starting job controller
W1009 20:59:12.134] I1009 20:59:11.839965   52867 shared_informer.go:197] Waiting for caches to sync for job
W1009 20:59:12.134] E1009 20:59:11.840369   52867 core.go:79] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1009 20:59:12.134] W1009 20:59:11.840451   52867 controllermanager.go:526] Skipping "service"
W1009 20:59:12.134] W1009 20:59:11.840486   52867 controllermanager.go:526] Skipping "ttl-after-finished"
W1009 20:59:12.135] I1009 20:59:11.841388   52867 controllermanager.go:534] Started "pvc-protection"
W1009 20:59:12.135] I1009 20:59:11.842928   52867 controllermanager.go:534] Started "horizontalpodautoscaling"
W1009 20:59:12.135] I1009 20:59:11.843527   52867 pvc_protection_controller.go:100] Starting PVC protection controller
W1009 20:59:12.135] I1009 20:59:11.843578   52867 shared_informer.go:197] Waiting for caches to sync for PVC protection
... skipping 10 lines ...
W1009 20:59:12.136] I1009 20:59:11.847450   52867 controllermanager.go:534] Started "attachdetach"
W1009 20:59:12.137] I1009 20:59:11.847636   52867 attach_detach_controller.go:323] Starting attach detach controller
W1009 20:59:12.137] I1009 20:59:11.847656   52867 shared_informer.go:197] Waiting for caches to sync for attach detach
W1009 20:59:12.137] I1009 20:59:11.847960   52867 controllermanager.go:534] Started "persistentvolume-expander"
W1009 20:59:12.137] I1009 20:59:11.847973   52867 expand_controller.go:308] Starting expand controller
W1009 20:59:12.137] I1009 20:59:11.847996   52867 shared_informer.go:197] Waiting for caches to sync for expand
W1009 20:59:12.137] W1009 20:59:11.880126   52867 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1009 20:59:12.137] I1009 20:59:11.908907   52867 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W1009 20:59:12.138] I1009 20:59:11.916337   52867 shared_informer.go:204] Caches are synced for PV protection 
W1009 20:59:12.138] E1009 20:59:11.916599   52867 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W1009 20:59:12.138] E1009 20:59:11.916720   52867 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1009 20:59:12.138] I1009 20:59:11.925112   52867 shared_informer.go:204] Caches are synced for namespace 
W1009 20:59:12.138] I1009 20:59:11.926575   52867 shared_informer.go:204] Caches are synced for service account 
W1009 20:59:12.138] I1009 20:59:11.928273   49326 controller.go:606] quota admission added evaluator for: serviceaccounts
W1009 20:59:12.139] I1009 20:59:11.928578   52867 shared_informer.go:204] Caches are synced for TTL 
W1009 20:59:12.139] I1009 20:59:11.944381   52867 shared_informer.go:204] Caches are synced for certificate-csrapproving 
W1009 20:59:12.139] I1009 20:59:11.948711   52867 shared_informer.go:204] Caches are synced for expand 
... skipping 81 lines ...
I1009 20:59:15.527] +++ working dir: /go/src/k8s.io/kubernetes
I1009 20:59:15.530] +++ command: run_RESTMapper_evaluation_tests
I1009 20:59:15.540] +++ [1009 20:59:15] Creating namespace namespace-1570654755-23621
I1009 20:59:15.613] namespace/namespace-1570654755-23621 created
I1009 20:59:15.682] Context "test" modified.
I1009 20:59:15.689] +++ [1009 20:59:15] Testing RESTMapper
I1009 20:59:15.794] +++ [1009 20:59:15] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1009 20:59:15.807] +++ exit code: 0
I1009 20:59:15.920] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1009 20:59:15.920] bindings                                                                      true         Binding
I1009 20:59:15.921] componentstatuses                 cs                                          false        ComponentStatus
I1009 20:59:15.921] configmaps                        cm                                          true         ConfigMap
I1009 20:59:15.921] endpoints                         ep                                          true         Endpoints
... skipping 317 lines ...
I1009 20:59:28.702] core.sh:79: Successful get pods/valid-pod {{.metadata.name}}: valid-pod
I1009 20:59:28.829] core.sh:81: Successful get pods {.items[*].metadata.name}: valid-pod
I1009 20:59:28.960] core.sh:82: Successful get pod valid-pod {.metadata.name}: valid-pod
I1009 20:59:29.145] core.sh:83: Successful get pod/valid-pod {.metadata.name}: valid-pod
I1009 20:59:29.277] core.sh:84: Successful get pods/valid-pod {.metadata.name}: valid-pod
I1009 20:59:29.424]
I1009 20:59:29.430] core.sh:86: FAIL!
I1009 20:59:29.430] Describe pods valid-pod
I1009 20:59:29.431]   Expected Match: Name:
I1009 20:59:29.431]   Not found in:
I1009 20:59:29.431] Name:         valid-pod
I1009 20:59:29.431] Namespace:    namespace-1570654767-23355
I1009 20:59:29.431] Priority:     0
... skipping 108 lines ...
I1009 20:59:29.926] QoS Class:        Guaranteed
I1009 20:59:29.926] Node-Selectors:   <none>
I1009 20:59:29.926] Tolerations:      <none>
I1009 20:59:29.926] Events:           <none>
I1009 20:59:29.926]
I1009 20:59:30.085] 
I1009 20:59:30.086] FAIL!
I1009 20:59:30.086] Describe pods
I1009 20:59:30.086]   Expected Match: Name:
I1009 20:59:30.086]   Not found in:
I1009 20:59:30.087] Name:         valid-pod
I1009 20:59:30.087] Namespace:    namespace-1570654767-23355
I1009 20:59:30.087] Priority:     0
... skipping 179 lines ...
I1009 20:59:36.782] poddisruptionbudget.policy/test-pdb-3 created
I1009 20:59:36.869] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1009 20:59:36.940] poddisruptionbudget.policy/test-pdb-4 created
I1009 20:59:37.034] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1009 20:59:37.190] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 20:59:37.363] pod/env-test-pod created
W1009 20:59:37.464] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1009 20:59:37.465] error: setting 'all' parameter but found a non empty selector. 
W1009 20:59:37.465] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1009 20:59:37.465] I1009 20:59:36.461628   49326 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W1009 20:59:37.465] error: min-available and max-unavailable cannot be both specified
I1009 20:59:37.566] 
I1009 20:59:37.566] core.sh:264: FAIL!
I1009 20:59:37.566] Describe pods --namespace=test-kubectl-describe-pod env-test-pod
I1009 20:59:37.566]   Expected Match: TEST_CMD_1
I1009 20:59:37.567]   Not found in:
I1009 20:59:37.567] Name:         env-test-pod
I1009 20:59:37.567] Namespace:    test-kubectl-describe-pod
I1009 20:59:37.567] Priority:     0
... skipping 23 lines ...
I1009 20:59:37.570] Tolerations:       <none>
I1009 20:59:37.570] Events:            <none>
I1009 20:59:37.571]
I1009 20:59:37.571] 264 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1009 20:59:37.571]
I1009 20:59:37.571] 
I1009 20:59:37.571] FAIL!
I1009 20:59:37.571] Describe pods --namespace=test-kubectl-describe-pod
I1009 20:59:37.571]   Expected Match: TEST_CMD_1
I1009 20:59:37.571]   Not found in:
I1009 20:59:37.571] Name:         env-test-pod
I1009 20:59:37.572] Namespace:    test-kubectl-describe-pod
I1009 20:59:37.572] Priority:     0
... skipping 150 lines ...
I1009 20:59:50.555] pod/valid-pod patched
I1009 20:59:50.656] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1009 20:59:50.741] pod/valid-pod patched
I1009 20:59:50.843] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1009 20:59:51.029] pod/valid-pod patched
I1009 20:59:51.138] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1009 20:59:51.314] +++ [1009 20:59:51] "kubectl patch with resourceVersion 498" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I1009 20:59:51.569] pod "valid-pod" deleted
I1009 20:59:51.581] pod/valid-pod replaced
I1009 20:59:51.682] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1009 20:59:51.844] Successful
I1009 20:59:51.844] message:error: --grace-period must have --force specified
I1009 20:59:51.844] has:\-\-grace-period must have \-\-force specified
I1009 20:59:52.004] Successful
I1009 20:59:52.004] message:error: --timeout must have --force specified
I1009 20:59:52.004] has:\-\-timeout must have \-\-force specified
I1009 20:59:52.161] node/node-v1-test created
W1009 20:59:52.262] W1009 20:59:52.161878   52867 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1009 20:59:52.363] node/node-v1-test replaced
I1009 20:59:52.439] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1009 20:59:52.520] node "node-v1-test" deleted
W1009 20:59:52.621] I1009 20:59:52.418524   52867 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"9d2b753d-a940-41df-a565-451e1782a542", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
I1009 20:59:52.723] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1009 20:59:52.895] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 67 lines ...
I1009 20:59:57.766] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 20:59:57.925] pod/test-pod created
W1009 20:59:58.026] Edit cancelled, no changes made.
W1009 20:59:58.026] Edit cancelled, no changes made.
W1009 20:59:58.026] Edit cancelled, no changes made.
W1009 20:59:58.026] Edit cancelled, no changes made.
W1009 20:59:58.026] error: 'name' already has a value (valid-pod), and --overwrite is false
W1009 20:59:58.027] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1009 20:59:58.027] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W1009 20:59:58.027] I1009 20:59:57.418853   52867 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"9d2b753d-a940-41df-a565-451e1782a542", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-v1-test event: Removing Node node-v1-test from Controller
I1009 20:59:58.127] pod "test-pod" deleted
I1009 20:59:58.128] +++ [1009 20:59:58] Creating namespace namespace-1570654798-6122
I1009 20:59:58.184] namespace/namespace-1570654798-6122 created
... skipping 42 lines ...
I1009 21:00:01.533] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1009 21:00:01.536] +++ working dir: /go/src/k8s.io/kubernetes
I1009 21:00:01.539] +++ command: run_kubectl_create_error_tests
I1009 21:00:01.552] +++ [1009 21:00:01] Creating namespace namespace-1570654801-7869
I1009 21:00:01.626] namespace/namespace-1570654801-7869 created
I1009 21:00:01.710] Context "test" modified.
I1009 21:00:01.716] +++ [1009 21:00:01] Testing kubectl create with error
W1009 21:00:01.817] Error: must specify one of -f and -k
W1009 21:00:01.817] 
W1009 21:00:01.817] Create a resource from a file or from stdin.
W1009 21:00:01.817] 
W1009 21:00:01.817]  JSON and YAML formats are accepted.
W1009 21:00:01.817] 
W1009 21:00:01.818] Examples:
... skipping 41 lines ...
W1009 21:00:01.823] 
W1009 21:00:01.823] Usage:
W1009 21:00:01.823]   kubectl create -f FILENAME [options]
W1009 21:00:01.824] 
W1009 21:00:01.824] Use "kubectl <command> --help" for more information about a given command.
W1009 21:00:01.824] Use "kubectl options" for a list of global command-line options (applies to all commands).
I1009 21:00:01.956] +++ [1009 21:00:01] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1009 21:00:02.056] kubectl convert is DEPRECATED and will be removed in a future version.
W1009 21:00:02.057] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1009 21:00:02.159] +++ exit code: 0
I1009 21:00:02.187] Recording: run_kubectl_apply_tests
I1009 21:00:02.188] Running command: run_kubectl_apply_tests
I1009 21:00:02.209] 
... skipping 16 lines ...
I1009 21:00:03.752] apply.sh:289: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I1009 21:00:03.839] pod "test-pod" deleted
I1009 21:00:04.067] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W1009 21:00:04.348] I1009 21:00:04.347359   49326 client.go:361] parsed scheme: "endpoint"
W1009 21:00:04.349] I1009 21:00:04.347408   49326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1009 21:00:04.352] I1009 21:00:04.352311   49326 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W1009 21:00:04.449] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1009 21:00:04.550] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I1009 21:00:04.567] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1009 21:00:04.595] +++ exit code: 0
I1009 21:00:04.632] Recording: run_kubectl_run_tests
I1009 21:00:04.632] Running command: run_kubectl_run_tests
I1009 21:00:04.655] 
... skipping 5 lines ...
I1009 21:00:04.833] Context "test" modified.
I1009 21:00:04.840] +++ [1009 21:00:04] Testing kubectl run
I1009 21:00:04.929] run.sh:29: Successful get jobs {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:00:05.021] job.batch/pi created
I1009 21:00:05.112] run.sh:33: Successful get jobs {{range.items}}{{.metadata.name}}:{{end}}: pi:
I1009 21:00:05.206]
I1009 21:00:05.206] FAIL!
I1009 21:00:05.206] Describe pods
I1009 21:00:05.207]   Expected Match: Name:
I1009 21:00:05.207]   Not found in:
I1009 21:00:05.207] Name:           pi-9mn6w
I1009 21:00:05.208] Namespace:      namespace-1570654804-6387
I1009 21:00:05.208] Priority:       0
... skipping 84 lines ...
I1009 21:00:07.012] Context "test" modified.
I1009 21:00:07.018] +++ [1009 21:00:07] Testing kubectl create filter
I1009 21:00:07.102] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:00:07.287] pod/selector-test-pod created
I1009 21:00:07.378] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1009 21:00:07.462] Successful
I1009 21:00:07.462] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1009 21:00:07.462] has:pods "selector-test-pod-dont-apply" not found
I1009 21:00:07.538] pod "selector-test-pod" deleted
I1009 21:00:07.558] +++ exit code: 0
I1009 21:00:07.589] Recording: run_kubectl_apply_deployments_tests
I1009 21:00:07.590] Running command: run_kubectl_apply_deployments_tests
I1009 21:00:07.610] 
... skipping 31 lines ...
W1009 21:00:09.701] I1009 21:00:09.607902   52867 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1570654807-20927", Name:"nginx", UID:"50fd7636-93e9-4160-bdeb-2d34908e555e", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W1009 21:00:09.702] I1009 21:00:09.612567   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654807-20927", Name:"nginx-8484dd655", UID:"8f47538c-de72-4e87-aaf1-71073b4f2c60", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-29l8w
W1009 21:00:09.702] I1009 21:00:09.615504   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654807-20927", Name:"nginx-8484dd655", UID:"8f47538c-de72-4e87-aaf1-71073b4f2c60", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-q5w22
W1009 21:00:09.702] I1009 21:00:09.617439   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654807-20927", Name:"nginx-8484dd655", UID:"8f47538c-de72-4e87-aaf1-71073b4f2c60", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-p74m8
I1009 21:00:09.803] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I1009 21:00:13.917] Successful
I1009 21:00:13.917] message:Error from server (Conflict): error when applying patch:
I1009 21:00:13.918] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1570654807-20927\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1009 21:00:13.918] to:
I1009 21:00:13.919] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I1009 21:00:13.919] Name: "nginx", Namespace: "namespace-1570654807-20927"
I1009 21:00:13.921] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1570654807-20927\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-10-09T21:00:09Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1570654807-20927" "resourceVersion":"593" "selfLink":"/apis/apps/v1/namespaces/namespace-1570654807-20927/deployments/nginx" "uid":"50fd7636-93e9-4160-bdeb-2d34908e555e"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-10-09T21:00:09Z" "lastUpdateTime":"2019-10-09T21:00:09Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-10-09T21:00:09Z" "lastUpdateTime":"2019-10-09T21:00:09Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I1009 21:00:13.921] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I1009 21:00:13.921] has:Error from server (Conflict)
W1009 21:00:15.899] I1009 21:00:15.898323   52867 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1570654798-7089
I1009 21:00:19.152] deployment.apps/nginx configured
I1009 21:00:19.246] Successful
I1009 21:00:19.246] message:        "name": "nginx2"
I1009 21:00:19.246]           "name": "nginx2"
I1009 21:00:19.246] has:"name": "nginx2"
... skipping 142 lines ...
I1009 21:00:26.319] +++ [1009 21:00:26] Creating namespace namespace-1570654826-21569
I1009 21:00:26.394] namespace/namespace-1570654826-21569 created
I1009 21:00:26.466] Context "test" modified.
I1009 21:00:26.473] +++ [1009 21:00:26] Testing kubectl get
I1009 21:00:26.561] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:00:26.651] Successful
I1009 21:00:26.651] message:Error from server (NotFound): pods "abc" not found
I1009 21:00:26.651] has:pods "abc" not found
I1009 21:00:26.742] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:00:26.829] Successful
I1009 21:00:26.829] message:Error from server (NotFound): pods "abc" not found
I1009 21:00:26.829] has:pods "abc" not found
I1009 21:00:26.915] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:00:26.997] Successful
I1009 21:00:26.997] message:{
I1009 21:00:26.997]     "apiVersion": "v1",
I1009 21:00:26.998]     "items": [],
... skipping 23 lines ...
I1009 21:00:27.326] has not:No resources found
I1009 21:00:27.406] Successful
I1009 21:00:27.407] message:NAME
I1009 21:00:27.407] has not:No resources found
I1009 21:00:27.492] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:00:27.586] (BSuccessful
I1009 21:00:27.587] message:error: the server doesn't have a resource type "foobar"
I1009 21:00:27.587] has not:No resources found
I1009 21:00:27.670] Successful
I1009 21:00:27.670] message:No resources found in namespace-1570654826-21569 namespace.
I1009 21:00:27.670] has:No resources found
I1009 21:00:27.753] Successful
I1009 21:00:27.754] message:
I1009 21:00:27.754] has not:No resources found
I1009 21:00:27.836] Successful
I1009 21:00:27.837] message:No resources found in namespace-1570654826-21569 namespace.
I1009 21:00:27.837] has:No resources found
I1009 21:00:27.920] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:00:27.999] Successful
I1009 21:00:27.999] message:Error from server (NotFound): pods "abc" not found
I1009 21:00:28.000] has:pods "abc" not found
I1009 21:00:28.001] FAIL!
I1009 21:00:28.001] message:Error from server (NotFound): pods "abc" not found
I1009 21:00:28.002] has not:List
I1009 21:00:28.002] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1009 21:00:28.106] Successful
I1009 21:00:28.106] message:I1009 21:00:28.065749   62659 loader.go:375] Config loaded from file:  /tmp/tmp.uRjugiaoRn/.kube/config
I1009 21:00:28.107] I1009 21:00:28.067292   62659 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I1009 21:00:28.107] I1009 21:00:28.084944   62659 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
I1009 21:00:33.693] Successful
I1009 21:00:33.694] message:NAME    DATA   AGE
I1009 21:00:33.694] one     0      0s
I1009 21:00:33.694] three   0      0s
I1009 21:00:33.694] two     0      0s
I1009 21:00:33.694] STATUS    REASON          MESSAGE
I1009 21:00:33.694] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1009 21:00:33.695] has not:watch is only supported on individual resources
I1009 21:00:34.787] Successful
I1009 21:00:34.788] message:STATUS    REASON          MESSAGE
I1009 21:00:34.788] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1009 21:00:34.788] has not:watch is only supported on individual resources
I1009 21:00:34.793] +++ [1009 21:00:34] Creating namespace namespace-1570654834-6868
I1009 21:00:34.867] namespace/namespace-1570654834-6868 created
I1009 21:00:34.942] Context "test" modified.
I1009 21:00:35.036] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:00:35.187] pod/valid-pod created
... skipping 56 lines ...
I1009 21:00:35.277] }
I1009 21:00:35.363] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1009 21:00:35.605] <no value>Successful
I1009 21:00:35.605] message:valid-pod:
I1009 21:00:35.605] has:valid-pod:
I1009 21:00:35.686] Successful
I1009 21:00:35.686] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1009 21:00:35.686] 	template was:
I1009 21:00:35.686] 		{.missing}
I1009 21:00:35.686] 	object given to jsonpath engine was:
I1009 21:00:35.687] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-10-09T21:00:35Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1570654834-6868", "resourceVersion":"695", "selfLink":"/api/v1/namespaces/namespace-1570654834-6868/pods/valid-pod", "uid":"cfdb2da0-af12-4668-a7f4-2b14e5dc9436"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I1009 21:00:35.687] has:missing is not found
I1009 21:00:35.768] Successful
I1009 21:00:35.769] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1009 21:00:35.769] 	template was:
I1009 21:00:35.769] 		{{.missing}}
I1009 21:00:35.769] 	raw data was:
I1009 21:00:35.770] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-10-09T21:00:35Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1570654834-6868","resourceVersion":"695","selfLink":"/api/v1/namespaces/namespace-1570654834-6868/pods/valid-pod","uid":"cfdb2da0-af12-4668-a7f4-2b14e5dc9436"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1009 21:00:35.770] 	object given to template engine was:
I1009 21:00:35.771] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-10-09T21:00:35Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1570654834-6868 resourceVersion:695 selfLink:/api/v1/namespaces/namespace-1570654834-6868/pods/valid-pod uid:cfdb2da0-af12-4668-a7f4-2b14e5dc9436] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I1009 21:00:35.771] has:map has no entry for key "missing"
W1009 21:00:35.871] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I1009 21:00:36.854] Successful
I1009 21:00:36.854] message:NAME        READY   STATUS    RESTARTS   AGE
I1009 21:00:36.854] valid-pod   0/1     Pending   0          0s
I1009 21:00:36.855] STATUS      REASON          MESSAGE
I1009 21:00:36.855] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1009 21:00:36.855] has:STATUS
I1009 21:00:36.856] Successful
I1009 21:00:36.856] message:NAME        READY   STATUS    RESTARTS   AGE
I1009 21:00:36.856] valid-pod   0/1     Pending   0          0s
I1009 21:00:36.856] STATUS      REASON          MESSAGE
I1009 21:00:36.856] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1009 21:00:36.857] has:valid-pod
I1009 21:00:37.934] Successful
I1009 21:00:37.934] message:pod/valid-pod
I1009 21:00:37.934] has not:STATUS
I1009 21:00:37.935] Successful
I1009 21:00:37.936] message:pod/valid-pod
... skipping 72 lines ...
I1009 21:00:39.031] status:
I1009 21:00:39.031]   phase: Pending
I1009 21:00:39.031]   qosClass: Guaranteed
I1009 21:00:39.031] ---
I1009 21:00:39.031] has:name: valid-pod
I1009 21:00:39.119] Successful
I1009 21:00:39.119] message:Error from server (NotFound): pods "invalid-pod" not found
I1009 21:00:39.119] has:"invalid-pod" not found
I1009 21:00:39.205] pod "valid-pod" deleted
I1009 21:00:39.303] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:00:39.475] (Bpod/redis-master created
I1009 21:00:39.479] pod/valid-pod created
I1009 21:00:39.576] Successful
... skipping 35 lines ...
I1009 21:00:40.766] +++ command: run_kubectl_exec_pod_tests
I1009 21:00:40.778] +++ [1009 21:00:40] Creating namespace namespace-1570654840-29128
I1009 21:00:40.857] namespace/namespace-1570654840-29128 created
I1009 21:00:40.935] Context "test" modified.
I1009 21:00:40.941] +++ [1009 21:00:40] Testing kubectl exec POD COMMAND
I1009 21:00:41.030] Successful
I1009 21:00:41.031] message:Error from server (NotFound): pods "abc" not found
I1009 21:00:41.031] has:pods "abc" not found
I1009 21:00:41.181] pod/test-pod created
I1009 21:00:41.291] Successful
I1009 21:00:41.291] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1009 21:00:41.291] has not:pods "test-pod" not found
I1009 21:00:41.293] Successful
I1009 21:00:41.293] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1009 21:00:41.293] has not:pod or type/name must be specified
I1009 21:00:41.377] pod "test-pod" deleted
I1009 21:00:41.396] +++ exit code: 0
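The block above exercises the plain `kubectl exec POD COMMAND` form. A rough sketch of the equivalent manual steps, assuming an illustrative pod manifest at pod.yaml (the file name is not from the log):

  # Exec against a pod that does not exist -> NotFound ("pods \"abc\" not found")
  kubectl exec abc -- date
  # Exec against a pod that exists but is not scheduled -> BadRequest
  # ("pod test-pod does not have a host assigned"), as seen above
  kubectl create -f pod.yaml        # creates test-pod (illustrative manifest)
  kubectl exec test-pod -- date
  kubectl delete pod test-pod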
I1009 21:00:41.432] Recording: run_kubectl_exec_resource_name_tests
I1009 21:00:41.432] Running command: run_kubectl_exec_resource_name_tests
I1009 21:00:41.456] 
... skipping 2 lines ...
I1009 21:00:41.464] +++ command: run_kubectl_exec_resource_name_tests
I1009 21:00:41.475] +++ [1009 21:00:41] Creating namespace namespace-1570654841-14547
I1009 21:00:41.565] namespace/namespace-1570654841-14547 created
I1009 21:00:41.642] Context "test" modified.
I1009 21:00:41.649] +++ [1009 21:00:41] Testing kubectl exec TYPE/NAME COMMAND
I1009 21:00:41.770] Successful
I1009 21:00:41.770] message:error: the server doesn't have a resource type "foo"
I1009 21:00:41.770] has:error:
I1009 21:00:41.860] Successful
I1009 21:00:41.861] message:Error from server (NotFound): deployments.apps "bar" not found
I1009 21:00:41.861] has:"bar" not found
I1009 21:00:42.027] pod/test-pod created
I1009 21:00:42.191] replicaset.apps/frontend created
W1009 21:00:42.292] I1009 21:00:42.204406   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654841-14547", Name:"frontend", UID:"3a5ef132-f350-4e34-94c6-089478d4cdb5", APIVersion:"apps/v1", ResourceVersion:"749", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-q2k5x
W1009 21:00:42.293] I1009 21:00:42.209339   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654841-14547", Name:"frontend", UID:"3a5ef132-f350-4e34-94c6-089478d4cdb5", APIVersion:"apps/v1", ResourceVersion:"749", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hm2x8
W1009 21:00:42.294] I1009 21:00:42.211564   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654841-14547", Name:"frontend", UID:"3a5ef132-f350-4e34-94c6-089478d4cdb5", APIVersion:"apps/v1", ResourceVersion:"749", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4bbdc
I1009 21:00:42.394] configmap/test-set-env-config created
I1009 21:00:42.488] Successful
I1009 21:00:42.489] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I1009 21:00:42.489] has:not implemented
I1009 21:00:42.584] Successful
I1009 21:00:42.584] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1009 21:00:42.585] has not:not found
I1009 21:00:42.586] Successful
I1009 21:00:42.586] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1009 21:00:42.586] has not:pod or type/name must be specified
I1009 21:00:42.701] Successful
I1009 21:00:42.701] message:Error from server (BadRequest): pod frontend-4bbdc does not have a host assigned
I1009 21:00:42.701] has not:not found
I1009 21:00:42.704] Successful
I1009 21:00:42.704] message:Error from server (BadRequest): pod frontend-4bbdc does not have a host assigned
I1009 21:00:42.704] has not:pod or type/name must be specified
I1009 21:00:42.792] pod "test-pod" deleted
I1009 21:00:42.877] replicaset.apps "frontend" deleted
I1009 21:00:42.963] configmap "test-set-env-config" deleted
I1009 21:00:42.982] +++ exit code: 0
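`kubectl exec` also accepts TYPE/NAME and resolves it to a pod through the resource's selector, which is what the checks above cover. A hedged sketch; the resource names come from the log, but the exact invocations are inferred from the error messages:

  # Unknown resource type -> "the server doesn't have a resource type \"foo\""
  kubectl exec foo/bar -- date
  # A ReplicaSet resolves to one of its pods via the selector
  kubectl exec rs/frontend -- date
  # A ConfigMap has no selector, so resolution fails
  # ("selector for *v1.ConfigMap not implemented")
  kubectl exec configmap/test-set-env-config -- date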
I1009 21:00:43.016] Recording: run_create_secret_tests
I1009 21:00:43.016] Running command: run_create_secret_tests
I1009 21:00:43.038] 
I1009 21:00:43.041] +++ Running case: test-cmd.run_create_secret_tests 
I1009 21:00:43.043] +++ working dir: /go/src/k8s.io/kubernetes
I1009 21:00:43.047] +++ command: run_create_secret_tests
I1009 21:00:43.141] Successful
I1009 21:00:43.142] message:Error from server (NotFound): secrets "mysecret" not found
I1009 21:00:43.142] has:secrets "mysecret" not found
I1009 21:00:43.301] Successful
I1009 21:00:43.301] message:Error from server (NotFound): secrets "mysecret" not found
I1009 21:00:43.301] has:secrets "mysecret" not found
I1009 21:00:43.303] Successful
I1009 21:00:43.303] message:user-specified
I1009 21:00:43.303] has:user-specified
I1009 21:00:43.375] Successful
I1009 21:00:43.448] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"2a9c5f2c-8c96-4901-a7a0-9f20f34bf4c7","resourceVersion":"770","creationTimestamp":"2019-10-09T21:00:43Z"}}
... skipping 2 lines ...
I1009 21:00:43.618] has:uid
I1009 21:00:43.689] Successful
I1009 21:00:43.690] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"2a9c5f2c-8c96-4901-a7a0-9f20f34bf4c7","resourceVersion":"771","creationTimestamp":"2019-10-09T21:00:43Z"},"data":{"key1":"config1"}}
I1009 21:00:43.690] has:config1
I1009 21:00:43.761] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"2a9c5f2c-8c96-4901-a7a0-9f20f34bf4c7"}}
I1009 21:00:43.849] Successful
I1009 21:00:43.850] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I1009 21:00:43.850] has:configmaps "tester-update-cm" not found
I1009 21:00:43.863] +++ exit code: 0
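The secret checks above follow the usual create/read/delete cycle. A minimal sketch with an assumed literal key; the actual key and the raw ConfigMap calls used by the test are not reconstructable from the log:

  # Reading a secret that does not exist -> NotFound ("secrets \"mysecret\" not found")
  kubectl get secret mysecret
  # Create it with a literal value and read that value back
  kubectl create secret generic mysecret --from-literal=username=user-specified
  kubectl get secret mysecret -o go-template='{{index .data "username"}}' | base64 --decode
  kubectl delete secret mysecret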
I1009 21:00:43.894] Recording: run_kubectl_create_kustomization_directory_tests
I1009 21:00:43.895] Running command: run_kubectl_create_kustomization_directory_tests
I1009 21:00:43.915] 
I1009 21:00:43.918] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
W1009 21:00:46.517] I1009 21:00:44.374518   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654841-14547", Name:"test-the-deployment-69fdbb5f7d", UID:"2e521da3-9d7e-48c7-a6a3-645c4e1f6781", APIVersion:"apps/v1", ResourceVersion:"779", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-kb4x2
W1009 21:00:46.517] I1009 21:00:44.376739   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654841-14547", Name:"test-the-deployment-69fdbb5f7d", UID:"2e521da3-9d7e-48c7-a6a3-645c4e1f6781", APIVersion:"apps/v1", ResourceVersion:"779", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-94nk9
I1009 21:00:47.499] Successful
I1009 21:00:47.499] message:NAME        READY   STATUS    RESTARTS   AGE
I1009 21:00:47.500] valid-pod   0/1     Pending   0          0s
I1009 21:00:47.500] STATUS      REASON          MESSAGE
I1009 21:00:47.500] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1009 21:00:47.500] has:Timeout exceeded while reading body
I1009 21:00:47.580] Successful
I1009 21:00:47.580] message:NAME        READY   STATUS    RESTARTS   AGE
I1009 21:00:47.581] valid-pod   0/1     Pending   0          1s
I1009 21:00:47.581] has:valid-pod
I1009 21:00:47.651] Successful
I1009 21:00:47.651] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1009 21:00:47.652] has:Invalid timeout value
I1009 21:00:47.734] pod "valid-pod" deleted
I1009 21:00:47.753] +++ exit code: 0
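The timeout checks above hinge on kubectl's client-side --request-timeout flag; a short sketch, assuming a pod named valid-pod (whether the test uses exactly these invocations is not visible in the log):

  # A too-short timeout interrupts the watch, surfacing as the
  # "unable to decode an event from the watch stream" InternalError above
  kubectl get pod valid-pod --watch --request-timeout=1s
  # A bare integer is accepted as seconds
  kubectl get pod valid-pod --request-timeout=1
  # A malformed value is rejected: "Invalid timeout value ..."
  kubectl get pod valid-pod --request-timeout=1A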
I1009 21:00:47.785] Recording: run_crd_tests
I1009 21:00:47.785] Running command: run_crd_tests
I1009 21:00:47.805] 
... skipping 158 lines ...
I1009 21:00:52.886] foo.company.com/test patched
I1009 21:00:52.981] crd.sh:236: Successful get foos/test {{.patched}}: value1
I1009 21:00:53.064] (Bfoo.company.com/test patched
I1009 21:00:53.156] crd.sh:238: Successful get foos/test {{.patched}}: value2
I1009 21:00:53.242] (Bfoo.company.com/test patched
I1009 21:00:53.337] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I1009 21:00:53.504] (B+++ [1009 21:00:53] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I1009 21:00:53.569] {
I1009 21:00:53.569]     "apiVersion": "company.com/v1",
I1009 21:00:53.569]     "kind": "Foo",
I1009 21:00:53.570]     "metadata": {
I1009 21:00:53.570]         "annotations": {
I1009 21:00:53.570]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 192 lines ...
I1009 21:01:20.850] crd.sh:455: Successful get bars {{len .items}}: 1
I1009 21:01:20.936] (Bnamespace "non-native-resources" deleted
I1009 21:01:26.169] crd.sh:458: Successful get bars {{len .items}}: 0
I1009 21:01:26.339] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I1009 21:01:26.439] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I1009 21:01:26.539] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
W1009 21:01:26.639] Error from server (NotFound): namespaces "non-native-resources" not found
I1009 21:01:26.740] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I1009 21:01:26.740] +++ exit code: 0
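Custom resources carry no strategic-merge metadata, which is why the patch step above had to fall back to a merge patch. A sketch against the Foo custom resource from the log, using the flags recorded in its change-cause annotation:

  # Merge patch works against a CR; the default strategic patch does not
  kubectl patch foos/test --type=merge -p '{"patched":"value2"}' --record
  # A null value in a merge patch removes the field again
  kubectl patch foos/test --type=merge -p '{"patched":null}' --record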
I1009 21:01:26.740] Recording: run_cmd_with_img_tests
I1009 21:01:26.741] Running command: run_cmd_with_img_tests
I1009 21:01:26.741] 
I1009 21:01:26.741] +++ Running case: test-cmd.run_cmd_with_img_tests 
... skipping 8 lines ...
W1009 21:01:27.017] I1009 21:01:27.007969   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654886-25089", Name:"test1-6cdffdb5b8", UID:"5e59685e-7810-4bde-8fc9-f3239c4ecdac", APIVersion:"apps/v1", ResourceVersion:"928", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-kvkzr
I1009 21:01:27.117] Successful
I1009 21:01:27.118] message:deployment.apps/test1 created
I1009 21:01:27.118] has:deployment.apps/test1 created
I1009 21:01:27.119] deployment.apps "test1" deleted
I1009 21:01:27.187] Successful
I1009 21:01:27.188] message:error: Invalid image name "InvalidImageName": invalid reference format
I1009 21:01:27.189] has:error: Invalid image name "InvalidImageName": invalid reference format
I1009 21:01:27.201] +++ exit code: 0
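The image-name check above is client-side validation of the image reference string. A sketch; whether the test drives this through `kubectl create deployment` or `kubectl run`, and which valid image it uses, is not visible in the truncated log, so both names below are illustrative:

  # A well-formed image reference creates the Deployment
  kubectl create deployment test1 --image=k8s.gcr.io/serve_hostname
  kubectl delete deployment test1
  # A reference that does not parse is rejected before anything is sent:
  #   error: Invalid image name "InvalidImageName": invalid reference format
  kubectl create deployment test1 --image=InvalidImageName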
I1009 21:01:27.240] +++ [1009 21:01:27] Testing recursive resources
I1009 21:01:27.245] +++ [1009 21:01:27] Creating namespace namespace-1570654887-11106
I1009 21:01:27.321] namespace/namespace-1570654887-11106 created
I1009 21:01:27.395] Context "test" modified.
I1009 21:01:27.492] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:01:27.774] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:27.776] (BSuccessful
I1009 21:01:27.777] message:pod/busybox0 created
I1009 21:01:27.777] pod/busybox1 created
I1009 21:01:27.778] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1009 21:01:27.778] has:error validating data: kind not set
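Everything from here on exercises kubectl's recursive file handling (-R/--recursive), where one intentionally broken manifest ("ind" instead of "kind") is mixed in with valid ones so every command reports the undecodable file while still acting on the rest. A sketch of the pattern, using the testdata paths shown in the log:

  # Create every manifest under the directory tree; the broken one fails
  # validation, the valid busybox pods are still created
  kubectl create -f hack/testdata/recursive/pod --recursive
  # Later recursive operations keep reporting the broken file, e.g. labeling
  kubectl label -f hack/testdata/recursive/pod --recursive mylabel=myvalue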
I1009 21:01:27.872] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:28.066] (Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1009 21:01:28.069] (BSuccessful
I1009 21:01:28.069] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1009 21:01:28.069] has:Object 'Kind' is missing
I1009 21:01:28.160] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:28.459] (Bgeneric-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1009 21:01:28.461] (BSuccessful
I1009 21:01:28.461] message:pod/busybox0 replaced
I1009 21:01:28.461] pod/busybox1 replaced
I1009 21:01:28.462] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1009 21:01:28.462] has:error validating data: kind not set
I1009 21:01:28.552] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:28.649] (BSuccessful
I1009 21:01:28.649] message:Name:         busybox0
I1009 21:01:28.649] Namespace:    namespace-1570654887-11106
I1009 21:01:28.649] Priority:     0
I1009 21:01:28.649] Node:         <none>
... skipping 159 lines ...
I1009 21:01:28.665] has:Object 'Kind' is missing
I1009 21:01:28.753] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:28.950] (Bgeneric-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1009 21:01:28.952] (BSuccessful
I1009 21:01:28.952] message:pod/busybox0 annotated
I1009 21:01:28.953] pod/busybox1 annotated
I1009 21:01:28.953] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1009 21:01:28.953] has:Object 'Kind' is missing
W1009 21:01:29.054] W1009 21:01:27.347745   49326 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1009 21:01:29.054] E1009 21:01:27.349350   52867 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:29.055] W1009 21:01:27.447774   49326 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1009 21:01:29.055] E1009 21:01:27.449237   52867 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:29.055] W1009 21:01:27.546462   49326 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1009 21:01:29.055] E1009 21:01:27.552860   52867 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:29.056] W1009 21:01:27.651629   49326 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1009 21:01:29.056] E1009 21:01:27.653067   52867 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:29.056] E1009 21:01:28.350881   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:29.056] E1009 21:01:28.450417   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:29.056] E1009 21:01:28.554657   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:29.057] E1009 21:01:28.654719   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1009 21:01:29.159] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:29.342] (Bgeneric-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1009 21:01:29.345] (BSuccessful
I1009 21:01:29.345] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1009 21:01:29.346] pod/busybox0 configured
I1009 21:01:29.346] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1009 21:01:29.346] pod/busybox1 configured
I1009 21:01:29.346] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1009 21:01:29.347] has:error validating data: kind not set
I1009 21:01:29.445] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:01:29.606] (Bdeployment.apps/nginx created
W1009 21:01:29.707] E1009 21:01:29.352460   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:29.708] E1009 21:01:29.452297   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:29.708] E1009 21:01:29.556274   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:29.708] I1009 21:01:29.611022   52867 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1570654887-11106", Name:"nginx", UID:"1b2431f6-da38-4a88-ad92-900cdac1fc99", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W1009 21:01:29.709] I1009 21:01:29.614973   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654887-11106", Name:"nginx-f87d999f7", UID:"62b76d23-6163-49d8-bce6-e883ec27d6cb", APIVersion:"apps/v1", ResourceVersion:"954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-w2wg2
W1009 21:01:29.710] I1009 21:01:29.618249   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654887-11106", Name:"nginx-f87d999f7", UID:"62b76d23-6163-49d8-bce6-e883ec27d6cb", APIVersion:"apps/v1", ResourceVersion:"954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-dk454
W1009 21:01:29.710] I1009 21:01:29.619882   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654887-11106", Name:"nginx-f87d999f7", UID:"62b76d23-6163-49d8-bce6-e883ec27d6cb", APIVersion:"apps/v1", ResourceVersion:"954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-w69p8
W1009 21:01:29.710] E1009 21:01:29.656376   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1009 21:01:29.811] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1009 21:01:29.822] (Bgeneric-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1009 21:01:30.003] (Bgeneric-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I1009 21:01:30.006] (BSuccessful
I1009 21:01:30.006] message:apiVersion: extensions/v1beta1
I1009 21:01:30.007] kind: Deployment
... skipping 40 lines ...
I1009 21:01:30.094] deployment.apps "nginx" deleted
I1009 21:01:30.198] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:30.397] (Bgeneric-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:30.400] (BSuccessful
I1009 21:01:30.400] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1009 21:01:30.401] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1009 21:01:30.401] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1009 21:01:30.401] has:Object 'Kind' is missing
W1009 21:01:30.501] kubectl convert is DEPRECATED and will be removed in a future version.
W1009 21:01:30.502] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1009 21:01:30.502] E1009 21:01:30.354374   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:30.503] E1009 21:01:30.456814   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:30.558] E1009 21:01:30.557721   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:30.658] E1009 21:01:30.657880   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1009 21:01:30.759] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:30.759] (BSuccessful
I1009 21:01:30.760] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1009 21:01:30.760] has:busybox0:busybox1:
I1009 21:01:30.760] Successful
I1009 21:01:30.761] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1009 21:01:30.761] has:Object 'Kind' is missing
I1009 21:01:30.761] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:30.831] (Bpod/busybox0 labeled
I1009 21:01:30.832] pod/busybox1 labeled
I1009 21:01:30.832] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1009 21:01:30.938] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1009 21:01:30.940] (BSuccessful
I1009 21:01:30.940] message:pod/busybox0 labeled
I1009 21:01:30.941] pod/busybox1 labeled
I1009 21:01:30.941] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1009 21:01:30.941] has:Object 'Kind' is missing
I1009 21:01:31.037] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:31.148] (Bpod/busybox0 patched
I1009 21:01:31.148] pod/busybox1 patched
I1009 21:01:31.148] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1009 21:01:31.243] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1009 21:01:31.247] (BSuccessful
I1009 21:01:31.247] message:pod/busybox0 patched
I1009 21:01:31.247] pod/busybox1 patched
I1009 21:01:31.248] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1009 21:01:31.248] has:Object 'Kind' is missing
I1009 21:01:31.344] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:31.550] (Bgeneric-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:01:31.552] (BSuccessful
I1009 21:01:31.553] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1009 21:01:31.553] pod "busybox0" force deleted
I1009 21:01:31.553] pod "busybox1" force deleted
I1009 21:01:31.553] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1009 21:01:31.553] has:Object 'Kind' is missing
I1009 21:01:31.652] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:01:31.812] (Breplicationcontroller/busybox0 created
I1009 21:01:31.816] replicationcontroller/busybox1 created
W1009 21:01:31.917] I1009 21:01:31.064681   52867 namespace_controller.go:185] Namespace has been deleted non-native-resources
W1009 21:01:31.917] E1009 21:01:31.356080   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:31.917] E1009 21:01:31.458470   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:31.917] E1009 21:01:31.559351   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:31.918] E1009 21:01:31.660243   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:31.918] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1009 21:01:31.918] I1009 21:01:31.816312   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570654887-11106", Name:"busybox0", UID:"4adf8027-69f2-43a4-b555-94c72d012ebe", APIVersion:"v1", ResourceVersion:"985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-b9k8f
W1009 21:01:31.919] I1009 21:01:31.820254   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570654887-11106", Name:"busybox1", UID:"ffb2f912-e8f4-452b-91a4-d85d74d36531", APIVersion:"v1", ResourceVersion:"987", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-j7n2t
I1009 21:01:32.019] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:32.022] (Bgeneric-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:32.115] (Bgeneric-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I1009 21:01:32.210] (Bgeneric-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I1009 21:01:32.401] (Bgeneric-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1009 21:01:32.502] (Bgeneric-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1009 21:01:32.504] (BSuccessful
I1009 21:01:32.505] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1009 21:01:32.505] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1009 21:01:32.506] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1009 21:01:32.506] has:Object 'Kind' is missing
I1009 21:01:32.591] horizontalpodautoscaler.autoscaling "busybox0" deleted
W1009 21:01:32.691] E1009 21:01:32.357692   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:32.692] E1009 21:01:32.460088   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:32.692] E1009 21:01:32.561140   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:32.693] E1009 21:01:32.661495   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1009 21:01:32.793] horizontalpodautoscaler.autoscaling "busybox1" deleted
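The HPA values checked above (min 1, max 2, 80% CPU) correspond to an autoscale call over the same recursive directory; a sketch, assuming the rc testdata path that appears later in the log:

  # Autoscale every replication controller found under the directory; the broken
  # manifest is reported, the valid ones get an HPA with min=1 max=2 cpu=80%
  kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80
  kubectl delete hpa busybox0 busybox1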
I1009 21:01:32.796] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:32.889] (Bgeneric-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I1009 21:01:32.985] (Bgeneric-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I1009 21:01:33.187] (Bgeneric-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1009 21:01:33.290] (Bgeneric-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1009 21:01:33.293] (BSuccessful
I1009 21:01:33.293] message:service/busybox0 exposed
I1009 21:01:33.293] service/busybox1 exposed
I1009 21:01:33.294] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1009 21:01:33.294] has:Object 'Kind' is missing
I1009 21:01:33.389] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:33.482] (Bgeneric-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I1009 21:01:33.569] (Bgeneric-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I1009 21:01:33.764] (Bgeneric-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I1009 21:01:33.856] (Bgeneric-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I1009 21:01:33.859] (BSuccessful
I1009 21:01:33.859] message:replicationcontroller/busybox0 scaled
I1009 21:01:33.860] replicationcontroller/busybox1 scaled
I1009 21:01:33.860] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1009 21:01:33.860] has:Object 'Kind' is missing
I1009 21:01:33.948] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:34.123] (Bgeneric-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:01:34.125] (BSuccessful
I1009 21:01:34.126] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1009 21:01:34.126] replicationcontroller "busybox0" force deleted
I1009 21:01:34.126] replicationcontroller "busybox1" force deleted
I1009 21:01:34.126] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1009 21:01:34.127] has:Object 'Kind' is missing
I1009 21:01:34.216] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:01:34.376] (Bdeployment.apps/nginx1-deployment created
I1009 21:01:34.382] deployment.apps/nginx0-deployment created
I1009 21:01:34.483] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1009 21:01:34.575] (Bgeneric-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1009 21:01:34.773] (Bgeneric-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1009 21:01:34.775] (BSuccessful
I1009 21:01:34.775] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I1009 21:01:34.775] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I1009 21:01:34.776] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1009 21:01:34.776] has:Object 'Kind' is missing
I1009 21:01:34.862] deployment.apps/nginx1-deployment paused
I1009 21:01:34.865] deployment.apps/nginx0-deployment paused
I1009 21:01:34.963] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1009 21:01:34.965] (BSuccessful
I1009 21:01:34.965] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I1009 21:01:35.254] 1         <none>
I1009 21:01:35.254] 
I1009 21:01:35.255] deployment.apps/nginx0-deployment 
I1009 21:01:35.255] REVISION  CHANGE-CAUSE
I1009 21:01:35.255] 1         <none>
I1009 21:01:35.255] 
I1009 21:01:35.256] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1009 21:01:35.256] has:nginx0-deployment
I1009 21:01:35.256] Successful
I1009 21:01:35.256] message:deployment.apps/nginx1-deployment 
I1009 21:01:35.256] REVISION  CHANGE-CAUSE
I1009 21:01:35.256] 1         <none>
I1009 21:01:35.257] 
I1009 21:01:35.257] deployment.apps/nginx0-deployment 
I1009 21:01:35.257] REVISION  CHANGE-CAUSE
I1009 21:01:35.257] 1         <none>
I1009 21:01:35.257] 
I1009 21:01:35.258] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1009 21:01:35.258] has:nginx1-deployment
I1009 21:01:35.258] Successful
I1009 21:01:35.259] message:deployment.apps/nginx1-deployment 
I1009 21:01:35.259] REVISION  CHANGE-CAUSE
I1009 21:01:35.259] 1         <none>
I1009 21:01:35.259] 
I1009 21:01:35.260] deployment.apps/nginx0-deployment 
I1009 21:01:35.260] REVISION  CHANGE-CAUSE
I1009 21:01:35.260] 1         <none>
I1009 21:01:35.260] 
I1009 21:01:35.261] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1009 21:01:35.261] has:Object 'Kind' is missing
I1009 21:01:35.358] deployment.apps "nginx1-deployment" force deleted
I1009 21:01:35.365] deployment.apps "nginx0-deployment" force deleted
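The paused/history output above comes from rollout subcommands run over the recursive deployment directory; a sketch using the path from the log (revision numbers and pause state will only approximately match another cluster):

  # Pause every deployment found under the directory, then inspect history;
  # the undecodable manifest is reported alongside the normal output
  kubectl rollout pause -f hack/testdata/recursive/deployment --recursive
  kubectl rollout history -f hack/testdata/recursive/deployment --recursive
  # Force deletion matches the "Immediate deletion does not wait ..." warning above
  kubectl delete -f hack/testdata/recursive/deployment --recursive --force --grace-period=0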
W1009 21:01:35.466] E1009 21:01:33.361016   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.466] E1009 21:01:33.461369   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.467] E1009 21:01:33.562503   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.467] I1009 21:01:33.662242   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570654887-11106", Name:"busybox0", UID:"4adf8027-69f2-43a4-b555-94c72d012ebe", APIVersion:"v1", ResourceVersion:"1006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-jksqp
W1009 21:01:35.467] E1009 21:01:33.663175   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.468] I1009 21:01:33.672921   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570654887-11106", Name:"busybox1", UID:"ffb2f912-e8f4-452b-91a4-d85d74d36531", APIVersion:"v1", ResourceVersion:"1010", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-6v5jg
W1009 21:01:35.468] E1009 21:01:34.362461   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.468] I1009 21:01:34.381076   52867 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1570654887-11106", Name:"nginx1-deployment", UID:"80819edc-f428-4818-a24f-77292b605799", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W1009 21:01:35.468] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1009 21:01:35.469] I1009 21:01:34.384775   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654887-11106", Name:"nginx1-deployment-7bdbbfb5cf", UID:"e34a18d6-e8e0-46dd-a87a-fe12740ad6eb", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-69kj7
W1009 21:01:35.469] I1009 21:01:34.384927   52867 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1570654887-11106", Name:"nginx0-deployment", UID:"69dbaf73-e25b-491f-bf7f-0bc9f02d413d", APIVersion:"apps/v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W1009 21:01:35.469] I1009 21:01:34.388263   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654887-11106", Name:"nginx1-deployment-7bdbbfb5cf", UID:"e34a18d6-e8e0-46dd-a87a-fe12740ad6eb", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-lt54g
W1009 21:01:35.470] I1009 21:01:34.389141   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654887-11106", Name:"nginx0-deployment-57c6bff7f6", UID:"f6b34ca2-ffce-4166-bed4-07ee595e6352", APIVersion:"apps/v1", ResourceVersion:"1031", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-hxppb
W1009 21:01:35.470] I1009 21:01:34.392282   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570654887-11106", Name:"nginx0-deployment-57c6bff7f6", UID:"f6b34ca2-ffce-4166-bed4-07ee595e6352", APIVersion:"apps/v1", ResourceVersion:"1031", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-vw49x
W1009 21:01:35.470] E1009 21:01:34.462761   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.471] E1009 21:01:34.563694   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.471] E1009 21:01:34.664508   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.471] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1009 21:01:35.471] E1009 21:01:35.363802   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.472] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W1009 21:01:35.472] E1009 21:01:35.464030   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.565] E1009 21:01:35.565320   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:35.666] E1009 21:01:35.666142   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:36.367] E1009 21:01:36.366682   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:36.466] E1009 21:01:36.465414   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1009 21:01:36.566] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1009 21:01:36.621] (Breplicationcontroller/busybox0 created
I1009 21:01:36.626] replicationcontroller/busybox1 created
I1009 21:01:36.730] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1009 21:01:36.822] (BSuccessful
I1009 21:01:36.823] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I1009 21:01:36.825] message:no rollbacker has been implemented for "ReplicationController"
I1009 21:01:36.825] no rollbacker has been implemented for "ReplicationController"
I1009 21:01:36.826] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1009 21:01:36.826] has:Object 'Kind' is missing
I1009 21:01:36.920] Successful
I1009 21:01:36.921] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1009 21:01:36.921] error: replicationcontrollers "busybox0" pausing is not supported
I1009 21:01:36.921] error: replicationcontrollers "busybox1" pausing is not supported
I1009 21:01:36.921] has:Object 'Kind' is missing
I1009 21:01:36.923] Successful
I1009 21:01:36.924] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1009 21:01:36.924] error: replicationcontrollers "busybox0" pausing is not supported
I1009 21:01:36.924] error: replicationcontrollers "busybox1" pausing is not supported
I1009 21:01:36.924] has:replicationcontrollers "busybox0" pausing is not supported
I1009 21:01:36.926] Successful
I1009 21:01:36.926] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1009 21:01:36.927] error: replicationcontrollers "busybox0" pausing is not supported
I1009 21:01:36.927] error: replicationcontrollers "busybox1" pausing is not supported
I1009 21:01:36.927] has:replicationcontrollers "busybox1" pausing is not supported
W1009 21:01:37.028] E1009 21:01:36.566951   52867 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1009 21:01:37.028] I1009 21:01:36.625212   52867 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570654887-11106", Name:"busybox0", UID:"55b54a0d-c70a-46f3-bae3-216f5a3edf04", APIVersion:"v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-bkzgt
W1009 21:01:37.029] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml":