PR: hpandeycodeit: Fixed static check failures for pkg/volume/, pkg/kubelet, pkg/util and cmd/kubelet
Result: FAILURE
Tests: 2 failed / 2861 succeeded
Started: 2019-09-11 18:53
Elapsed: 26m39s
Revision:
Builder: gke-prow-ssd-pool-1a225945-5dqn
Refs: master:001f2cd2, 81688:2c419916
pod: 44fccc34-d4c5-11e9-a582-8a06e185f399
infra-commit: 72663f1bb
repo: k8s.io/kubernetes
repo-commit: 52d6112d194d3e0b2deaeaf07f3f96fb4f8ff6e3
repos: {u'k8s.io/kubernetes': u'master:001f2cd2b553d06028c8542c8817820ee05d657f,81688:2c419916fd5a1a8bd75154d25f9859261b3a67a5'}

Test Failures


k8s.io/kubernetes/test/integration/examples TestAggregatedAPIServer 12s

go test -v k8s.io/kubernetes/test/integration/examples -run TestAggregatedAPIServer$
=== RUN   TestAggregatedAPIServer
I0911 19:11:53.748142  107326 serving.go:312] Generated self-signed cert (/tmp/test-integration-apiserver069430921/apiserver.crt, /tmp/test-integration-apiserver069430921/apiserver.key)
I0911 19:11:53.748176  107326 server.go:623] external host was not specified, using 172.17.0.2
W0911 19:11:55.877839  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.877875  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.877885  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.878319  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.878340  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.878348  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.878356  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.878390  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.879598  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.879769  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.879893  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.879978  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.880226  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.880467  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.880573  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:11:55.880739  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 19:11:55.880840  107326 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0911 19:11:55.880927  107326 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0911 19:11:55.881085  107326 master.go:259] Using reconciler: lease
I0911 19:11:55.881463  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.881606  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.882913  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.882944  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.885678  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.885715  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.886887  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.886926  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.888443  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.888476  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.896554  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.896599  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.899090  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.899536  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.901350  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.901418  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.907416  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.907464  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.910010  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.910076  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.912240  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.912274  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.916000  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.916248  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.918475  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.918640  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.921040  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.921212  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.923026  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.923152  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.929404  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.929451  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.931309  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.931401  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.933524  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.933582  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.934835  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:55.934861  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:55.935760  107326 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0911 19:11:56.219475  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.219528  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.220711  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.220772  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.222633  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.222668  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.224539  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.224573  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.226866  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.226904  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.230122  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.230419  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.231886  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.231918  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.234152  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.234183  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.235594  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.235626  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.237164  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.237261  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.241174  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.241212  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.243233  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.243269  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.246418  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.246551  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.248594  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.248632  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.252953  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.253132  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.255339  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.255530  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.257777  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.257817  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.260326  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.260380  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.263684  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.263800  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.267053  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.267205  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.269239  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.269275  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.271208  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.271251  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.275904  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.275938  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.277721  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.277756  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.279550  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.279680  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.307556  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.307754  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.309467  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.309601  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.311834  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.311867  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.314610  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.314725  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.316348  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.316394  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.320598  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.320636  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.322806  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.322894  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.325608  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.325742  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.327013  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.327045  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.331911  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.331943  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.334529  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.334558  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.336588  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.336687  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.340572  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.340602  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.341723  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.341927  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:56.343425  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.343455  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0911 19:11:56.574682  107326 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W0911 19:11:56.610908  107326 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0911 19:11:56.646053  107326 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0911 19:11:56.652897  107326 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0911 19:11:56.680312  107326 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0911 19:11:56.719043  107326 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0911 19:11:56.719068  107326 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0911 19:11:56.878464  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:11:56.878644  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:11:57.797034  107326 secure_serving.go:123] Serving securely on 127.0.0.1:46827
E0911 19:11:57.802313  107326 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
I0911 19:11:58.824302  107326 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0911 19:12:00.209090  107326 serving.go:312] Generated self-signed cert (/tmp/test-integration-wardle-server941466061/apiserver.crt, /tmp/test-integration-wardle-server941466061/apiserver.key)
W0911 19:12:00.988147  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:12:00.988418  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:12:00.988618  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:12:00.988663  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 19:12:00.988682  107326 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
I0911 19:12:00.988694  107326 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I0911 19:12:00.990832  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:12:00.990882  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:12:00.992417  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:12:00.992454  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:12:00.995761  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:12:00.995982  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:12:01.082749  107326 secure_serving.go:123] Serving securely on 127.0.0.1:42803
I0911 19:12:01.210295  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:12:01.210411  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:12:02.804651  107326 serving.go:312] Generated self-signed cert (/tmp/test-integration-aggregator476862855/apiserver.crt, /tmp/test-integration-aggregator476862855/apiserver.key)
I0911 19:12:03.805356  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:12:03.805482  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0911 19:12:03.880725  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:12:03.881163  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:12:03.881462  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 19:12:03.881586  107326 plugins.go:158] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook.
I0911 19:12:03.881668  107326 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
W0911 19:12:03.881767  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:12:03.884267  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 19:12:03.884735  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:12:03.884884  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:12:03.887002  107326 client.go:361] parsed scheme: "endpoint"
I0911 19:12:03.888419  107326 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0911 19:12:03.891692  107326 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 19:12:03.895825  107326 secure_serving.go:123] Serving securely on 127.0.0.1:37191
I0911 19:12:03.895885  107326 available_controller.go:383] Starting AvailableConditionController
I0911 19:12:03.895917  107326 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0911 19:12:03.896673  107326 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0911 19:12:03.896805  107326 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0911 19:12:03.996143  107326 cache.go:39] Caches are synced for AvailableConditionController controller
I0911 19:12:03.997945  107326 cache.go:39] Caches are synced for APIServiceRegistrationController controller
--- FAIL: TestAggregatedAPIServer (12.50s)
    apiserver_test.go:222: open /tmp/test-integration-wardle-server941466061/apiserver.crt: no such file or directory
    apiserver_test.go:222: open /tmp/test-integration-wardle-server941466061/apiserver.crt: no such file or directory
    apiserver_test.go:222: open /tmp/test-integration-wardle-server941466061/apiserver.crt: no such file or directory
    apiserver_test.go:222: open /tmp/test-integration-wardle-server941466061/apiserver.crt: no such file or directory
    apiserver_test.go:222: open /tmp/test-integration-wardle-server941466061/apiserver.crt: no such file or directory
    apiserver_test.go:222: open /tmp/test-integration-wardle-server941466061/apiserver.crt: no such file or directory
    apiserver_test.go:222: open /tmp/test-integration-wardle-server941466061/apiserver.crt: no such file or directory
    apiserver_test.go:453: {"kind":"APIGroupList","groups":[{"name":"wardle.k8s.io","versions":[{"groupVersion":"wardle.k8s.io/v1beta1","version":"v1beta1"},{"groupVersion":"wardle.k8s.io/v1alpha1","version":"v1alpha1"}],"preferredVersion":{"groupVersion":"wardle.k8s.io/v1beta1","version":"v1beta1"},"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":":42803"}]}]}
        
    apiserver_test.go:482: {"kind":"APIGroup","apiVersion":"v1","name":"wardle.k8s.io","versions":[{"groupVersion":"wardle.k8s.io/v1beta1","version":"v1beta1"},{"groupVersion":"wardle.k8s.io/v1alpha1","version":"v1alpha1"}],"preferredVersion":{"groupVersion":"wardle.k8s.io/v1beta1","version":"v1beta1"}}
        
    apiserver_test.go:500: {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"wardle.k8s.io/v1alpha1","resources":[{"name":"fischers","singularName":"","namespaced":false,"kind":"Fischer","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"storageVersionHash":"u0hTAhBTXHw="},{"name":"flunders","singularName":"","namespaced":true,"kind":"Flunder","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"storageVersionHash":"k36Bkt6yJrQ="}]}
        
    apiserver_test.go:382: Discovery call expected to return failed unavailable service
    apiserver_test.go:374: Discovery call didn't return expected error: <nil>
I0911 19:12:04.159848  107326 secure_serving.go:167] Stopped listening on 127.0.0.1:37191
I0911 19:12:04.160099  107326 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
I0911 19:12:04.160123  107326 available_controller.go:395] Shutting down AvailableConditionController
I0911 19:12:04.160523  107326 secure_serving.go:167] Stopped listening on 127.0.0.1:42803

from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190911-190759.xml



k8s.io/kubernetes/test/integration/volumescheduling TestVolumeProvision 28s

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$
=== RUN   TestVolumeProvision
W0911 19:18:47.477852  111245 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0911 19:18:47.477883  111245 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
W0911 19:18:47.478759  111245 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0911 19:18:47.478798  111245 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0911 19:18:47.478812  111245 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0911 19:18:47.478828  111245 master.go:259] Using reconciler: 
I0911 19:18:47.481191  111245 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.481435  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.481467  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.482208  111245 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0911 19:18:47.482247  111245 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0911 19:18:47.482240  111245 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.482694  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.482722  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.483306  111245 store.go:1342] Monitoring events count at <storage-prefix>//events
I0911 19:18:47.483354  111245 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0911 19:18:47.483346  111245 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.483512  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.483537  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.484304  111245 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0911 19:18:47.484309  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.484340  111245 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.484405  111245 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0911 19:18:47.484551  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.484577  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.485511  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.485639  111245 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0911 19:18:47.485705  111245 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0911 19:18:47.485807  111245 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.485976  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.485992  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.486014  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.486515  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.486988  111245 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0911 19:18:47.487020  111245 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0911 19:18:47.487240  111245 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.487407  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.487429  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.487879  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.488133  111245 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0911 19:18:47.488233  111245 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0911 19:18:47.488281  111245 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.488441  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.488474  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.488919  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.489518  111245 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0911 19:18:47.489661  111245 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.489784  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.489808  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.489866  111245 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0911 19:18:47.490774  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.491291  111245 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0911 19:18:47.491392  111245 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0911 19:18:47.491484  111245 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.491682  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.491724  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.493107  111245 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0911 19:18:47.493154  111245 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0911 19:18:47.493253  111245 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.493401  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.493422  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.493969  111245 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0911 19:18:47.494036  111245 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0911 19:18:47.494103  111245 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.494210  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.494228  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.494779  111245 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0911 19:18:47.494887  111245 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.495042  111245 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0911 19:18:47.495113  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.495193  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.495217  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.495781  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.495800  111245 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0911 19:18:47.495934  111245 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.496002  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.496063  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.496083  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.496139  111245 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0911 19:18:47.496932  111245 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0911 19:18:47.496970  111245 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0911 19:18:47.497024  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.497079  111245 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.497233  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.497254  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.497598  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.498067  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.498199  111245 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0911 19:18:47.498260  111245 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0911 19:18:47.498281  111245 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.498430  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.498452  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.498936  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.499310  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.499333  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.500116  111245 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.500224  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.500285  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.501008  111245 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0911 19:18:47.501163  111245 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0911 19:18:47.501120  111245 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0911 19:18:47.502009  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.502714  111245 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.503007  111245 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.503822  111245 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.504348  111245 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.504810  111245 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.505449  111245 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.505859  111245 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.505953  111245 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.506159  111245 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.506534  111245 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.506940  111245 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.507065  111245 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.507699  111245 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.507883  111245 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.508272  111245 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.508457  111245 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.509036  111245 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.509268  111245 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.509425  111245 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.509537  111245 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.509694  111245 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.509778  111245 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.509875  111245 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.510303  111245 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.510504  111245 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.511040  111245 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.511586  111245 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.511757  111245 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.511962  111245 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.512506  111245 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.512815  111245 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.513268  111245 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.513776  111245 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.514308  111245 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.514955  111245 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.515212  111245 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.515337  111245 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0911 19:18:47.515378  111245 master.go:461] Enabling API group "authentication.k8s.io".
I0911 19:18:47.515403  111245 master.go:461] Enabling API group "authorization.k8s.io".
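Throughout this startup log the apiserver dumps the same storagebackend.Config struct for every resource it wires up. The large integers at the end are Go time.Duration fields printed as raw nanosecond counts: CompactionInterval:300000000000 is 5 minutes and CountMetricPollPeriod:60000000000 is 1 minute. A minimal sketch confirming the arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The Config dumps print time.Duration fields as nanosecond counts.
	compaction := time.Duration(300000000000) // etcd compaction interval
	countPoll := time.Duration(60000000000)   // object-count metric poll period
	fmt.Println(compaction, countPoll)        // prints: 5m0s 1m0s
}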
I0911 19:18:47.515570  111245 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.515763  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.515798  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.516527  111245 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 19:18:47.516650  111245 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.516744  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.516783  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.516848  111245 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 19:18:47.517712  111245 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 19:18:47.517814  111245 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 19:18:47.517858  111245 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.518007  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.518031  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.518209  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.519111  111245 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 19:18:47.519152  111245 master.go:461] Enabling API group "autoscaling".
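Each registration cycle above emits a client.go `parsed scheme: "endpoint"` line followed by the gRPC ccResolverWrapper pushing http://127.0.0.1:2379: the storage layer is dialing the test etcd through the etcd v3 client's custom resolver. A rough sketch of constructing an equivalent client against the same endpoint (the key prefix is copied from the Prefix field in the dumps; the exact key layout is an assumption, not something this log states):

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	// Dialing this endpoint goes through clientv3's gRPC resolver, which
	// is what produces the `parsed scheme: "endpoint"` lines in the log.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Count keys under the test server's randomized storage prefix.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	resp, err := cli.Get(ctx, "c7519225-3cdd-45ac-81fd-d821fbe2b7f3/",
		clientv3.WithPrefix(), clientv3.WithCountOnly())
	cancel()
	if err != nil {
		panic(err)
	}
	fmt.Println("keys under test prefix:", resp.Count)
}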
I0911 19:18:47.519233  111245 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 19:18:47.519280  111245 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.519401  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.519429  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.520022  111245 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0911 19:18:47.520049  111245 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0911 19:18:47.520133  111245 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.520236  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.520253  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.520460  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.520843  111245 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0911 19:18:47.520865  111245 master.go:461] Enabling API group "batch".
I0911 19:18:47.520874  111245 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0911 19:18:47.520977  111245 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.521067  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.521097  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.521119  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.521639  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.521798  111245 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0911 19:18:47.521838  111245 master.go:461] Enabling API group "certificates.k8s.io".
I0911 19:18:47.521850  111245 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0911 19:18:47.521999  111245 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.522157  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.522214  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.522546  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.522582  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.523042  111245 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0911 19:18:47.523082  111245 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0911 19:18:47.523190  111245 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.523327  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.523395  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.524167  111245 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0911 19:18:47.524187  111245 master.go:461] Enabling API group "coordination.k8s.io".
I0911 19:18:47.524197  111245 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0911 19:18:47.524220  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.524287  111245 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0911 19:18:47.524302  111245 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.524428  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.524449  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.525026  111245 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0911 19:18:47.525053  111245 master.go:461] Enabling API group "extensions".
I0911 19:18:47.525127  111245 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0911 19:18:47.525186  111245 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.525423  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.525441  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.525458  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.525843  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.526157  111245 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0911 19:18:47.526242  111245 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0911 19:18:47.526405  111245 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.526664  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.526712  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.527031  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.527817  111245 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0911 19:18:47.527842  111245 master.go:461] Enabling API group "networking.k8s.io".
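The reflector.go "Listing and watching *T" lines paired with watch_cache.go "Replace watchCache (rev: 58699)" show the server-side watch cache doing an initial LIST and then replacing its contents at the resource version the list returned, before watching from that revision. The same list-then-watch pattern is what client-go informers do on the client side; a minimal sketch (the kubeconfig path is illustrative):

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The informer LISTs pods once, records the resourceVersion, then
	// WATCHes from it -- the client-side analogue of the watch cache.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("added:", obj.(*v1.Pod).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	cache.WaitForCacheSync(stop, podInformer.HasSynced)
	time.Sleep(time.Minute) // observe events briefly, then exit
}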
I0911 19:18:47.527861  111245 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0911 19:18:47.527874  111245 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.528023  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.528407  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.529339  111245 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0911 19:18:47.529374  111245 master.go:461] Enabling API group "node.k8s.io".
I0911 19:18:47.529405  111245 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0911 19:18:47.529519  111245 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.529621  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.529636  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.529703  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.530481  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.530520  111245 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0911 19:18:47.530554  111245 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0911 19:18:47.530631  111245 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.530817  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.530835  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.531744  111245 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0911 19:18:47.531760  111245 master.go:461] Enabling API group "policy".
I0911 19:18:47.531781  111245 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0911 19:18:47.531795  111245 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.531868  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.531880  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.532285  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.532427  111245 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0911 19:18:47.532516  111245 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0911 19:18:47.532571  111245 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.532856  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.532879  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.533056  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.533834  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.534075  111245 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0911 19:18:47.534106  111245 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.534124  111245 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0911 19:18:47.534246  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.534265  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.535140  111245 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0911 19:18:47.535254  111245 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.535302  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.535410  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.535428  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.535481  111245 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0911 19:18:47.536076  111245 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0911 19:18:47.536130  111245 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.536179  111245 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0911 19:18:47.536426  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.536430  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.536472  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.537509  111245 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0911 19:18:47.537576  111245 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0911 19:18:47.537634  111245 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.537685  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.537753  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.537769  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.538401  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.538644  111245 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0911 19:18:47.539032  111245 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0911 19:18:47.539123  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.539148  111245 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.539756  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.539784  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.540623  111245 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0911 19:18:47.540731  111245 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.540826  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.540838  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.540907  111245 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0911 19:18:47.541441  111245 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0911 19:18:47.541469  111245 master.go:461] Enabling API group "rbac.authorization.k8s.io".
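The rbac.authorization.k8s.io block above registers storage for both namespaced kinds (roles, rolebindings) and cluster-scoped kinds (clusterroles, clusterrolebindings), all stored in rbac.authorization.k8s.io/v1. For reference, a sketch of creating one of the namespaced kinds with a client-go clientset of this era (newer client-go releases add a context and options argument to Create):

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A minimal namespaced Role; objects like this land in the //roles
	// key space that the store.go line above reports monitoring.
	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-reader", Namespace: "default"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}
	created, err := clientset.RbacV1().Roles("default").Create(role)
	if err != nil {
		panic(err)
	}
	fmt.Println("created role:", created.Name)
}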
I0911 19:18:47.542646  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.542751  111245 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0911 19:18:47.542823  111245 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.542992  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.543017  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.543645  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.543879  111245 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0911 19:18:47.543918  111245 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0911 19:18:47.544066  111245 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.544189  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.544211  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.544643  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.544743  111245 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0911 19:18:47.544762  111245 master.go:461] Enabling API group "scheduling.k8s.io".
I0911 19:18:47.544826  111245 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0911 19:18:47.544886  111245 master.go:450] Skipping disabled API group "settings.k8s.io".
I0911 19:18:47.545037  111245 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.545151  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.545179  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.545603  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.545806  111245 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0911 19:18:47.545831  111245 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0911 19:18:47.546026  111245 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.546150  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.546175  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.547021  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.547604  111245 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0911 19:18:47.547652  111245 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.547707  111245 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0911 19:18:47.547756  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.547771  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.548613  111245 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0911 19:18:47.548655  111245 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.548691  111245 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0911 19:18:47.548700  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.549998  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.550024  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.550835  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.550997  111245 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0911 19:18:47.551104  111245 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0911 19:18:47.551166  111245 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.551282  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.551306  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.552065  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.552388  111245 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0911 19:18:47.552417  111245 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0911 19:18:47.552528  111245 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.552633  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.552654  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.553621  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.553772  111245 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0911 19:18:47.553797  111245 master.go:461] Enabling API group "storage.k8s.io".
I0911 19:18:47.553878  111245 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0911 19:18:47.554072  111245 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.554197  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.554216  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.554864  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.554967  111245 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0911 19:18:47.555019  111245 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0911 19:18:47.555471  111245 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.555590  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.555608  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.556260  111245 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0911 19:18:47.556303  111245 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0911 19:18:47.556426  111245 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.556583  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.556605  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.557204  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.557308  111245 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0911 19:18:47.557461  111245 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0911 19:18:47.557489  111245 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.557691  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.557713  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.558136  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.558742  111245 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0911 19:18:47.559014  111245 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.559110  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.559130  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.558773  111245 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0911 19:18:47.558874  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.560201  111245 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0911 19:18:47.560221  111245 master.go:461] Enabling API group "apps".
I0911 19:18:47.560248  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.560249  111245 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.560345  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.560355  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.560428  111245 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0911 19:18:47.561053  111245 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0911 19:18:47.561080  111245 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.561102  111245 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0911 19:18:47.561169  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.561180  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.562428  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.562434  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.563001  111245 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0911 19:18:47.563042  111245 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0911 19:18:47.563036  111245 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.563148  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.563164  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.563718  111245 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0911 19:18:47.563760  111245 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.563854  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.564346  111245 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0911 19:18:47.590127  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.590204  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.590958  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.591484  111245 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0911 19:18:47.591519  111245 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0911 19:18:47.591559  111245 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0911 19:18:47.591560  111245 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.591861  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:47.591878  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 19:18:47.592741  111245 store.go:1342] Monitoring events count at <storage-prefix>//events
I0911 19:18:47.592835  111245 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0911 19:18:47.592865  111245 master.go:461] Enabling API group "events.k8s.io".
I0911 19:18:47.593138  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.593152  111245 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.593426  111245 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.593547  111245 watch_cache.go:405] Replace watchCache (rev: 58699) 
I0911 19:18:47.593814  111245 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.593949  111245 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.594062  111245 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.594168  111245 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.594392  111245 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.594536  111245 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.594648  111245 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.594750  111245 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.595583  111245 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.595831  111245 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.596609  111245 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.596879  111245 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.597646  111245 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.597901  111245 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.598684  111245 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.598923  111245 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.599619  111245 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.599899  111245 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 19:18:47.599951  111245 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0911 19:18:47.600536  111245 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.600663  111245 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.601009  111245 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.601736  111245 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.602273  111245 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.603011  111245 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.603480  111245 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.604238  111245 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.605062  111245 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.605306  111245 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.605922  111245 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 19:18:47.605990  111245 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0911 19:18:47.606606  111245 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.606941  111245 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.607449  111245 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.607971  111245 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.608537  111245 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.609190  111245 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.609842  111245 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.610350  111245 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.610899  111245 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.611453  111245 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.612029  111245 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 19:18:47.612100  111245 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0911 19:18:47.612644  111245 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.613103  111245 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 19:18:47.613162  111245 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0911 19:18:47.613670  111245 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.614113  111245 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.614427  111245 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.614856  111245 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.615243  111245 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.615656  111245 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.616061  111245 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 19:18:47.616118  111245 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0911 19:18:47.616811  111245 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.617411  111245 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.617640  111245 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.618286  111245 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.618533  111245 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.618828  111245 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.619404  111245 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.619668  111245 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.619888  111245 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.620543  111245 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.620778  111245 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.621046  111245 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 19:18:47.621101  111245 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0911 19:18:47.621110  111245 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0911 19:18:47.621688  111245 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.622244  111245 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.622761  111245 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.623222  111245 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 19:18:47.623910  111245 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c7519225-3cdd-45ac-81fd-d821fbe2b7f3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
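The storage_factory.go:285 lines above record, for each API resource, the group/version it is encoded under in etcd and the shared storagebackend.Config; the durations are Go time.Duration values printed in nanoseconds, so CompactionInterval 300000000000 is 5 minutes and CountMetricPollPeriod 60000000000 is 1 minute. A minimal sketch, assuming the k8s.io/apiserver storagebackend package vendored at this commit, of the same config — field names and values are copied from the log lines, everything else is illustrative:

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Values copied from the storage_factory.go:285 lines above;
	// zero-valued fields (Type, Codec, Transformer, ...) are left out.
	cfg := storagebackend.Config{
		Prefix: "c7519225-3cdd-45ac-81fd-d821fbe2b7f3", // per-test etcd key prefix
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"}, // local test etcd
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // logged as 300000000000 ns
		CountMetricPollPeriod: time.Minute,     // logged as 60000000000 ns
	}
	fmt.Printf("%+v\n", cfg)
}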
I0911 19:18:47.626783  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:47.626810  111245 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0911 19:18:47.626820  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:47.626831  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:47.626840  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:47.626848  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:47.626882  111245 httplog.go:90] GET /healthz: (203.266µs) 0 [Go-http-client/1.1 127.0.0.1:45912]
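Each healthz.go:191 report above is a single multi-line log entry: one line per registered check, [+] for passing and [-] for failing, with the concrete failure reason withheld from the HTTP body (the preceding healthz.go:177 lines carry it server-side only); the endpoint does not return 200 until every check passes. A minimal sketch of that aggregation pattern — not the actual k8s.io/apiserver/pkg/server/healthz implementation, and the listen address is illustrative:

package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

// check pairs a name with a probe, mirroring the named entries
// ([+]ping, [-]etcd, ...) in the reports above.
type check struct {
	name string
	run  func() error
}

// handler renders the [+]/[-] report and fails the request with a 500
// until every check passes.
func handler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var b strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				// The concrete error stays server-side, as in the log.
				fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
			} else {
				fmt.Fprintf(&b, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			b.WriteString("healthz check failed\n")
			w.WriteHeader(http.StatusInternalServerError)
		}
		fmt.Fprint(w, b.String())
	}
}

func main() {
	http.HandleFunc("/healthz", handler([]check{
		{name: "ping", run: func() error { return nil }},
	}))
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil)) // illustrative address
}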
I0911 19:18:47.627958  111245 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.242757ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:47.630444  111245 httplog.go:90] GET /api/v1/services: (917.359µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:47.634022  111245 httplog.go:90] GET /api/v1/services: (798.898µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:47.635635  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:47.635657  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:47.635668  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:47.635674  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:47.635679  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:47.635696  111245 httplog.go:90] GET /healthz: (131.747µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:47.636341  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (687.754µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45912]
I0911 19:18:47.637101  111245 httplog.go:90] GET /api/v1/services: (1.074622ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:47.637323  111245 httplog.go:90] GET /api/v1/services: (694.031µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:47.638133  111245 httplog.go:90] POST /api/v1/namespaces: (1.389995ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45912]
I0911 19:18:47.639453  111245 httplog.go:90] GET /api/v1/namespaces/kube-public: (837.452µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:47.641273  111245 httplog.go:90] POST /api/v1/namespaces: (1.415798ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:47.642538  111245 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (883.892µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:47.644216  111245 httplog.go:90] POST /api/v1/namespaces: (1.155527ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
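The GET 404 / POST 201 pairs above are the bootstrap controller ensuring the system namespaces (kube-system, kube-public, kube-node-lease) exist. A sketch of the same get-or-create pattern, assuming the client-go signatures of this era (no context argument yet); it is a helper against any provided clientset, not the controller's actual code:

package bootstrap

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace mirrors the GET-then-POST sequence in the log:
// a 404 on the GET is followed by a create; any other error is returned.
func ensureNamespace(cs kubernetes.Interface, name string) error {
	_, err := cs.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
	if err == nil {
		return nil // namespace already exists
	}
	if !errors.IsNotFound(err) {
		return err
	}
	_, err = cs.CoreV1().Namespaces().Create(&corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	})
	return err
}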
I0911 19:18:47.727794  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:47.727837  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:47.727849  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:47.727859  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:47.727867  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:47.727906  111245 httplog.go:90] GET /healthz: (284.976µs) 0 [Go-http-client/1.1 127.0.0.1:45916]
I0911 19:18:47.736512  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:47.736539  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:47.736548  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:47.736555  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:47.736567  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:47.736597  111245 httplog.go:90] GET /healthz: (280.694µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:47.827688  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:47.827720  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:47.827730  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:47.827737  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:47.827743  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:47.827766  111245 httplog.go:90] GET /healthz: (217.637µs) 0 [Go-http-client/1.1 127.0.0.1:45916]
I0911 19:18:47.836476  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:47.836513  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:47.836528  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:47.836537  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:47.836545  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:47.836578  111245 httplog.go:90] GET /healthz: (271.988µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:47.927727  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:47.927764  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:47.927775  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:47.927782  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:47.927788  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:47.927827  111245 httplog.go:90] GET /healthz: (282.789µs) 0 [Go-http-client/1.1 127.0.0.1:45916]
I0911 19:18:47.936405  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:47.936438  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:47.936447  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:47.936454  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:47.936469  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:47.936496  111245 httplog.go:90] GET /healthz: (250.571µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:48.027630  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:48.027671  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.027682  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.027692  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.027702  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.027736  111245 httplog.go:90] GET /healthz: (240.853µs) 0 [Go-http-client/1.1 127.0.0.1:45916]
I0911 19:18:48.036454  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:48.036483  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.036496  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.036506  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.036517  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.036546  111245 httplog.go:90] GET /healthz: (246.389µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:48.127683  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:48.127712  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.127721  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.127728  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.127733  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.127765  111245 httplog.go:90] GET /healthz: (258.904µs) 0 [Go-http-client/1.1 127.0.0.1:45916]
I0911 19:18:48.136486  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:48.136521  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.136533  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.136543  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.136550  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.136580  111245 httplog.go:90] GET /healthz: (260.489µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:48.227687  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:48.227717  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.227726  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.227733  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.227738  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.227770  111245 httplog.go:90] GET /healthz: (212.753µs) 0 [Go-http-client/1.1 127.0.0.1:45916]
I0911 19:18:48.236400  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:48.236443  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.236455  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.236466  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.236474  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.236513  111245 httplog.go:90] GET /healthz: (265.366µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:48.327643  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:48.327678  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.327687  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.327694  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.327701  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.327743  111245 httplog.go:90] GET /healthz: (233.001µs) 0 [Go-http-client/1.1 127.0.0.1:45916]
I0911 19:18:48.336471  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:48.336533  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.336547  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.336557  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.336565  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.336612  111245 httplog.go:90] GET /healthz: (298.26µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:48.427622  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:48.427655  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.427667  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.427677  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.427683  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.427723  111245 httplog.go:90] GET /healthz: (230.21µs) 0 [Go-http-client/1.1 127.0.0.1:45916]
I0911 19:18:48.436478  111245 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 19:18:48.436509  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.436521  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.436531  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.436539  111245 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.436602  111245 httplog.go:90] GET /healthz: (283.78µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:48.478549  111245 client.go:361] parsed scheme: "endpoint"
I0911 19:18:48.478633  111245 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
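The client.go/endpoint.go lines record the etcd clientv3 gRPC resolver handing http://127.0.0.1:2379 to the connection; from the next poll onward the report below shows [+]etcd ok and only the post-start hooks remain failing. The polling visible in the repeated GET /healthz entries amounts to roughly this loop — the URL, timeout, and cadence here are illustrative, not the harness's actual code:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a /healthz URL until it returns 200, i.e. until
// every check in the [+]/[-] report has flipped to [+].
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond) // roughly the poll cadence seen above
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// The test apiserver listens on a random port; this address is a placeholder.
	if err := waitHealthy("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}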
I0911 19:18:48.528946  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.528980  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.528990  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.528998  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.529056  111245 httplog.go:90] GET /healthz: (1.48007ms) 0 [Go-http-client/1.1 127.0.0.1:45916]
I0911 19:18:48.537435  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.537473  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.537484  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.537493  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.537532  111245 httplog.go:90] GET /healthz: (1.225486ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:48.628286  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.492415ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:48.628322  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.628343  111245 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 19:18:48.628353  111245 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 19:18:48.628388  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 19:18:48.628419  111245 httplog.go:90] GET /healthz: (791.174µs) 0 [Go-http-client/1.1 127.0.0.1:45932]
I0911 19:18:48.628518  111245 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.735562ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45916]
I0911 19:18:48.628554  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.3094ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45930]
I0911 19:18:48.629820  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (858.305µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45932]
I0911 19:18:48.630622  111245 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.661242ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:48.630674  111245 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.377126ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.630832  111245 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0911 19:18:48.631457  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (782.171µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45932]
I0911 19:18:48.631804  111245 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (825.94µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.632324  111245 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.336902ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:48.632921  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.047701ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45932]
I0911 19:18:48.634042  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (737.629µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45932]
I0911 19:18:48.634554  111245 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.347345ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.634691  111245 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0911 19:18:48.634703  111245 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
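[Note: the storage_scheduling.go lines above show the scheduling post-start hook creating the built-in priority classes idempotently: each GET that 404s is followed by a POST that returns 201, and on a warm start the GET would succeed and the create would be skipped. A sketch of that get-or-create step with client-go — current signatures, which take a context; the hook in this log used the scheduling.k8s.io/v1beta1 API, while the sketch uses v1:]

package main

import (
	"context"
	"log"

	schedulingv1 "k8s.io/api/scheduling/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensurePriorityClass mirrors the GET -> 404 -> POST sequence in the log:
// look the class up, and create it only when it does not exist yet.
func ensurePriorityClass(ctx context.Context, cs kubernetes.Interface, name string, value int32) error {
	_, err := cs.SchedulingV1().PriorityClasses().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil // already exists: nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.SchedulingV1().PriorityClasses().Create(ctx, &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Value:      value,
	}, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The two names and values match the log above.
	for name, v := range map[string]int32{
		"system-node-critical":    2000001000,
		"system-cluster-critical": 2000000000,
	} {
		if err := ensurePriorityClass(context.Background(), cs, name, v); err != nil {
			log.Fatal(err)
		}
	}
}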
I0911 19:18:48.635162  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (682.645µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45932]
I0911 19:18:48.636487  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (720.297µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.636823  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.636847  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:48.636879  111245 httplog.go:90] GET /healthz: (707.549µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:48.637506  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (690.368µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.638760  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (870.443µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.639716  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (643.885µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.641255  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.214626ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.641524  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0911 19:18:48.642573  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (827.705µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.644674  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.675288ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.644933  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0911 19:18:48.645844  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (727.931µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.647602  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.400263ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.647801  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0911 19:18:48.648793  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (813.572µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.650784  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.578482ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.651015  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0911 19:18:48.651867  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (618.783µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.653544  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.294713ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.653709  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0911 19:18:48.654779  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (903.207µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.656796  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.637808ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.656989  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0911 19:18:48.657891  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (719.016µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.659574  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.318199ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.659920  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0911 19:18:48.661103  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (822.545µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.663121  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.45539ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.663307  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0911 19:18:48.664475  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (967.436µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.667066  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.022467ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.667348  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0911 19:18:48.668607  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.03864ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.670941  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.8903ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.671340  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0911 19:18:48.672527  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (876.012µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.674531  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.61169ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.674722  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0911 19:18:48.675748  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (834.92µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.677874  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.642669ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.678231  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0911 19:18:48.680948  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (2.451572ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.682960  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.512759ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.683200  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0911 19:18:48.684340  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (984.927µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.686003  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.202119ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.686172  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0911 19:18:48.686978  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (664.54µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.689430  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.140474ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.689630  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0911 19:18:48.691486  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (857.78µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.693078  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.266866ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.693387  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0911 19:18:48.694315  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (712.694µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.695854  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.098537ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.696146  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0911 19:18:48.697148  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (662.698µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.698914  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.416604ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.699094  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0911 19:18:48.699843  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (640.772µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.701344  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.284248ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.701556  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0911 19:18:48.702466  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (740.067µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.704055  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.178951ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.704298  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0911 19:18:48.705319  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (690.054µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.707212  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.404916ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.707522  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0911 19:18:48.708488  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (720.307µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.710149  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.179598ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.710479  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0911 19:18:48.711509  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (827.541µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.713264  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.30441ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.713598  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0911 19:18:48.714458  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (681.37µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.716134  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.332334ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.716281  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0911 19:18:48.717050  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (544.022µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.718350  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (989.299µs) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.718556  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0911 19:18:48.719503  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (801.682µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.721275  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.264835ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.721545  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0911 19:18:48.722400  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (679.3µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.724224  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.293255ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.724452  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0911 19:18:48.725342  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (696.484µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.726830  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.169805ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.727036  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0911 19:18:48.727908  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (628.751µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.728334  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.728386  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:48.728432  111245 httplog.go:90] GET /healthz: (1.037115ms) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:48.729234  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.106835ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.729434  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0911 19:18:48.730497  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (770.517µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.732097  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.277196ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.732313  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0911 19:18:48.733150  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (636.335µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.734612  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.138425ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.734821  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0911 19:18:48.735692  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (684.042µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.736950  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.736978  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:48.737003  111245 httplog.go:90] GET /healthz: (802.427µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:48.737109  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.129887ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.737352  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0911 19:18:48.738343  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (781.803µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.740084  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.262775ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.740387  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0911 19:18:48.741181  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (632.38µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.742670  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.057914ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.742844  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0911 19:18:48.743640  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (649.388µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.745212  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.23959ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.745534  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0911 19:18:48.746626  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (771.999µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.748296  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.307693ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.748583  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0911 19:18:48.749438  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (638.412µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.751077  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.266304ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.751304  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0911 19:18:48.752098  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (601.181µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.753553  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.189523ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.753711  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0911 19:18:48.754589  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (721.218µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.756476  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.440527ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.756670  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0911 19:18:48.757703  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (855.504µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.759471  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.433714ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.759778  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0911 19:18:48.760741  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (695.12µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.762624  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.449148ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.762940  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0911 19:18:48.763946  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (777.196µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.765491  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.166929ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.765772  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0911 19:18:48.766687  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (708.515µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.768295  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.168888ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.768519  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0911 19:18:48.769469  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (768.325µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.771101  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.167994ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.771294  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0911 19:18:48.772203  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (674.459µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.773748  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.135234ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.773921  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0911 19:18:48.774747  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (662.748µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.776496  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.141549ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.776687  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0911 19:18:48.777544  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (706.626µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.779597  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.719959ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.779911  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0911 19:18:48.780759  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (665.185µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.782744  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.653388ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.782928  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0911 19:18:48.783724  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (649.715µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.785137  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.14044ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.785273  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0911 19:18:48.787496  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (718.6µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.808975  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.041717ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.809239  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0911 19:18:48.828270  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.828308  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:48.828341  111245 httplog.go:90] GET /healthz: (1.006976ms) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:48.828543  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.625671ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.837431  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.837561  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:48.837780  111245 httplog.go:90] GET /healthz: (1.424581ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.849269  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.207877ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.849636  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0911 19:18:48.868895  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.324924ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.888827  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.900019ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.889048  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0911 19:18:48.908322  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.409145ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.928801  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.929025  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:48.929233  111245 httplog.go:90] GET /healthz: (1.48694ms) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:48.928903  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.966982ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.929654  111245 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
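[Note: at this point the rbac/bootstrap-roles hook has reconciled every default ClusterRole with the same GET -> 404 -> POST sequence, and the ClusterRoleBindings pass begins below; the hook's healthz check stays red until the whole set is reconciled. A compilable sketch of that reconcile step for one binding, again using current client-go signatures — ensureClusterRoleBinding is a hypothetical helper name:]

package bootstrap

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRoleBinding creates a binding from a default ClusterRole to
// the given subjects, but only when the GET reports it does not exist yet.
func ensureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, roleName string, subjects []rbacv1.Subject) error {
	_, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, roleName, metav1.GetOptions{})
	if err == nil || !apierrors.IsNotFound(err) {
		return err // nil when it already exists, a real error otherwise
	}
	_, err = cs.RbacV1().ClusterRoleBindings().Create(ctx, &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: roleName},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     roleName,
		},
		Subjects: subjects,
	}, metav1.CreateOptions{})
	return err
}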
I0911 19:18:48.937024  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:48.937055  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:48.937088  111245 httplog.go:90] GET /healthz: (852.203µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.947991  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.104285ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.969343  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.398018ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:48.969635  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0911 19:18:48.988275  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.296938ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.008899  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.024123ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.009168  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0911 19:18:49.028158  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.219666ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.028453  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.028554  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.028718  111245 httplog.go:90] GET /healthz: (1.345835ms) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:49.037112  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.037273  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.037498  111245 httplog.go:90] GET /healthz: (1.262792ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.048961  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.963099ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.049413  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0911 19:18:49.068510  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.484067ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.089303  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.394202ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.089585  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0911 19:18:49.108206  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.267676ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.129124  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.129579  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.129875  111245 httplog.go:90] GET /healthz: (2.439329ms) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:49.129508  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.500374ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.130435  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0911 19:18:49.137008  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.137033  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.137064  111245 httplog.go:90] GET /healthz: (776.618µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.148035  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.135951ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.169057  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.085848ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.169292  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0911 19:18:49.188117  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.196191ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.209041  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.06646ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.209236  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0911 19:18:49.228092  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.206416ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.228705  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.228806  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.228945  111245 httplog.go:90] GET /healthz: (1.591597ms) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:49.237084  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.237200  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.237335  111245 httplog.go:90] GET /healthz: (990.906µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.248742  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.856782ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.248973  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0911 19:18:49.268220  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.325231ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.289006  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.042656ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.289428  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0911 19:18:49.308490  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.492251ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.328432  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.328462  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.328508  111245 httplog.go:90] GET /healthz: (1.096905ms) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:49.329847  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.760143ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.330052  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0911 19:18:49.337239  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.337270  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.337305  111245 httplog.go:90] GET /healthz: (950.595µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.347871  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.008035ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.369073  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.065211ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.369422  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0911 19:18:49.388330  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.327666ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.409312  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.240425ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.409650  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0911 19:18:49.428309  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.428344  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.428392  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.473035ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.428411  111245 httplog.go:90] GET /healthz: (998.399µs) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:49.437090  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.437129  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.437159  111245 httplog.go:90] GET /healthz: (905.92µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.449045  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.150735ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.449293  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0911 19:18:49.468154  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.228166ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.489234  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.302603ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.489604  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0911 19:18:49.508099  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.184589ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.528495  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.528525  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.528554  111245 httplog.go:90] GET /healthz: (1.138554ms) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:49.529240  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.325411ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.529523  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0911 19:18:49.537345  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.537400  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.537497  111245 httplog.go:90] GET /healthz: (1.234163ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.548163  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.252077ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.568899  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.034403ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.569123  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0911 19:18:49.588271  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.35666ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.609326  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.329255ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.609612  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0911 19:18:49.628231  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.335463ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.628231  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.628592  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.628687  111245 httplog.go:90] GET /healthz: (1.271515ms) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:49.637441  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.637474  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.637506  111245 httplog.go:90] GET /healthz: (1.212072ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.648809  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.870717ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.649298  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0911 19:18:49.668206  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.283552ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.689304  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.328225ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.690147  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0911 19:18:49.708429  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.429292ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.728910  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.728946  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.728981  111245 httplog.go:90] GET /healthz: (1.48904ms) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:49.729270  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.340544ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.729572  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0911 19:18:49.737290  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.737316  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.737389  111245 httplog.go:90] GET /healthz: (1.025266ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.748014  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.127792ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.769772  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.769488ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.770352  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0911 19:18:49.788210  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.279479ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.809149  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.184914ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.809640  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0911 19:18:49.828416  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.51573ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:49.828441  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.828875  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.829126  111245 httplog.go:90] GET /healthz: (1.632167ms) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:49.837244  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.837278  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.837323  111245 httplog.go:90] GET /healthz: (1.034202ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.849087  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.152369ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.849373  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0911 19:18:49.869687  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (2.73052ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.889101  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.108472ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.889575  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0911 19:18:49.908411  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.473643ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.928487  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.928521  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.928558  111245 httplog.go:90] GET /healthz: (1.126463ms) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:49.928910  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.968393ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.929130  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0911 19:18:49.937185  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:49.937214  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:49.937304  111245 httplog.go:90] GET /healthz: (1.033894ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.948000  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.121244ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.969508  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.270425ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:49.969935  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0911 19:18:49.988213  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.290035ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.009095  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.130869ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.009355  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0911 19:18:50.028289  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.028323  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.028373  111245 httplog.go:90] GET /healthz: (930.573µs) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:50.028645  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.730506ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.037010  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.037161  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.037330  111245 httplog.go:90] GET /healthz: (1.060504ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.048949  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.994864ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.049413  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0911 19:18:50.068522  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.560235ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.089108  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.183577ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.089409  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0911 19:18:50.108427  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.384392ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.128882  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.128913  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.128971  111245 httplog.go:90] GET /healthz: (1.324737ms) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:50.129630  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.672333ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.129926  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0911 19:18:50.137194  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.137224  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.137263  111245 httplog.go:90] GET /healthz: (930.382µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.147988  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.103708ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.169401  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.412397ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.169814  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0911 19:18:50.188261  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.326817ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.211027  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.196456ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.211306  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0911 19:18:50.228295  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.228328  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.228384  111245 httplog.go:90] GET /healthz: (886.896µs) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:50.228522  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.573889ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.237131  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.237158  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.237242  111245 httplog.go:90] GET /healthz: (901.82µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.248711  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.817789ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.249028  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0911 19:18:50.268376  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.429024ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.289185  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.178091ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.289558  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0911 19:18:50.308286  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.329422ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.328453  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.328489  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.328554  111245 httplog.go:90] GET /healthz: (1.185679ms) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:50.329078  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119767ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.329424  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0911 19:18:50.337229  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.337257  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.337293  111245 httplog.go:90] GET /healthz: (1.020985ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.348030  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.140363ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.369196  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.162568ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.369662  111245 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
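[Editor's note: the GET-404/POST-201 pairs above are the RBAC bootstrap poststarthook reconciling each default clusterrolebinding: look the binding up, and create it only when the lookup reports NotFound. A minimal client-go sketch of that get-then-create pattern follows; the package name, function name, and context-taking method signatures are illustrative (recent client-go), not the actual storage_rbac.go code.]

    package rbacbootstrap

    import (
    	"context"

    	rbacv1 "k8s.io/api/rbac/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // ensureClusterRoleBinding mirrors the pattern in the log: a GET that
    // returns 404 is followed by a POST that returns 201; anything already
    // present is left alone.
    func ensureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, crb *rbacv1.ClusterRoleBinding) error {
    	_, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, crb.Name, metav1.GetOptions{})
    	if err == nil {
    		return nil // already exists, nothing to do
    	}
    	if !apierrors.IsNotFound(err) {
    		return err // a real error, not just "missing"
    	}
    	_, err = cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
    	return err
    }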
I0911 19:18:50.388324  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.362816ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.390186  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.372941ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.408840  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.898354ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.409105  111245 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0911 19:18:50.428305  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.319646ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.428547  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.428579  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.428647  111245 httplog.go:90] GET /healthz: (1.18347ms) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:50.429945  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.262717ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.437082  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.437112  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.437179  111245 httplog.go:90] GET /healthz: (953.977µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.448648  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.787654ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.448862  111245 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0911 19:18:50.468615  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.587186ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.470753  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.660491ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.488818  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.912708ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.489119  111245 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0911 19:18:50.508208  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.314574ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.509990  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.306863ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.529177  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.529208  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.282349ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.529212  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.529278  111245 httplog.go:90] GET /healthz: (1.845788ms) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:50.529482  111245 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0911 19:18:50.537270  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.537299  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.537334  111245 httplog.go:90] GET /healthz: (1.020097ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.547817  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.073628ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.549518  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.187661ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.568929  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.988153ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.569157  111245 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0911 19:18:50.588136  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.26403ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.589740  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.209355ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.610029  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.100704ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.610284  111245 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0911 19:18:50.628822  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.629158  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.629380  111245 httplog.go:90] GET /healthz: (1.422048ms) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:50.628879  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.005252ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.631435  111245 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.284293ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.637168  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.637341  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.637495  111245 httplog.go:90] GET /healthz: (1.288929ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.649432  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.520851ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.649770  111245 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0911 19:18:50.668295  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.30417ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.670419  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.658096ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.689380  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.402485ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.689641  111245 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0911 19:18:50.708209  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.275983ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.709949  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.229976ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.728346  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.728402  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.728443  111245 httplog.go:90] GET /healthz: (1.07863ms) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:50.728805  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.909093ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.729104  111245 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0911 19:18:50.737167  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.737326  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.737502  111245 httplog.go:90] GET /healthz: (1.214584ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.748249  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.379331ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.750120  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.258301ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.768875  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.941342ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.769322  111245 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0911 19:18:50.788291  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.366453ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.790044  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.283856ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.809038  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.135546ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.809571  111245 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0911 19:18:50.828268  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.348932ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:50.828750  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.828777  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.828824  111245 httplog.go:90] GET /healthz: (1.383865ms) 0 [Go-http-client/1.1 127.0.0.1:45914]
I0911 19:18:50.830645  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.209447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.837144  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.837255  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.837305  111245 httplog.go:90] GET /healthz: (1.100562ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.849958  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.053762ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.850287  111245 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0911 19:18:50.868124  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.155537ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.869926  111245 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.207416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.889076  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.134637ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.889303  111245 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0911 19:18:50.907785  111245 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (932.115µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.909223  111245 httplog.go:90] GET /api/v1/namespaces/kube-public: (984.094µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.928350  111245 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 19:18:50.928392  111245 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 19:18:50.928429  111245 httplog.go:90] GET /healthz: (979.421µs) 0 [Go-http-client/1.1 127.0.0.1:45934]
I0911 19:18:50.928831  111245 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.884059ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.929073  111245 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0911 19:18:50.937078  111245 httplog.go:90] GET /healthz: (851.043µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.938423  111245 httplog.go:90] GET /api/v1/namespaces/default: (1.015072ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.940311  111245 httplog.go:90] POST /api/v1/namespaces: (1.548491ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.941828  111245 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.132441ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.945339  111245 httplog.go:90] POST /api/v1/namespaces/default/services: (3.133812ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.946793  111245 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (932.842µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:50.948588  111245 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.42261ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:51.028620  111245 httplog.go:90] GET /healthz: (1.075154ms) 200 [Go-http-client/1.1 127.0.0.1:45914]
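[Editor's note: every /healthz probe above reports "healthz check failed" while the rbac/bootstrap-roles poststarthook is unfinished, and the endpoint first answers 200 once the last bootstrap rolebinding lands. A test harness can simply poll the endpoint until it turns healthy; a minimal sketch under that assumption (URL, interval, and timeout are placeholders):]

    package readiness

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthy polls a /healthz endpoint until it answers 200 OK,
    // matching the repeated GET /healthz probes visible in the log.
    func waitForHealthy(url string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // all poststarthooks report ok
    			}
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("%s not healthy within %v", url, timeout)
    }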
W0911 19:18:51.029428  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:51.029461  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:51.029497  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:51.029510  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:51.029522  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:51.029531  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:51.029618  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:51.029652  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:51.029688  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:51.029768  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:51.029808  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 19:18:51.029855  111245 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0911 19:18:51.029901  111245 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0911 19:18:51.030458  111245 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030482  111245 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030552  111245 reflector.go:120] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030563  111245 reflector.go:120] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030578  111245 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030603  111245 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030620  111245 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030650  111245 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030667  111245 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030722  111245 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030732  111245 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030571  111245 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030965  111245 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.030976  111245 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.031048  111245 reflector.go:120] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.031061  111245 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.031114  111245 reflector.go:120] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.031128  111245 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.031966  111245 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (327.129µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45954]
I0911 19:18:51.031988  111245 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (579.807µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:18:51.032038  111245 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (330.894µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45956]
I0911 19:18:51.032098  111245 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (605.19µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:18:51.032136  111245 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (469.409µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45960]
I0911 19:18:51.032408  111245 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (325.161µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45950]
I0911 19:18:51.032496  111245 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (954.554µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45946]
I0911 19:18:51.032791  111245 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (650.405µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45958]
I0911 19:18:51.032825  111245 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (308.191µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45948]
I0911 19:18:51.032857  111245 get.go:250] Starting watch for /api/v1/pods, rv=58699 labels= fields= timeout=8m33s
I0911 19:18:51.033063  111245 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=58699 labels= fields= timeout=6m5s
I0911 19:18:51.033142  111245 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=58699 labels= fields= timeout=5m40s
I0911 19:18:51.033287  111245 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=58699 labels= fields= timeout=9m2s
I0911 19:18:51.033435  111245 get.go:250] Starting watch for /api/v1/nodes, rv=58699 labels= fields= timeout=6m48s
I0911 19:18:51.033144  111245 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=58699 labels= fields= timeout=8m25s
I0911 19:18:51.033547  111245 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=58699 labels= fields= timeout=9m14s
I0911 19:18:51.033555  111245 reflector.go:120] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.033572  111245 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.033696  111245 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=58699 labels= fields= timeout=5m58s
I0911 19:18:51.033765  111245 get.go:250] Starting watch for /api/v1/services, rv=58938 labels= fields= timeout=7m57s
I0911 19:18:51.034046  111245 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.034131  111245 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0911 19:18:51.034415  111245 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (383.977µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45962]
I0911 19:18:51.034998  111245 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=58699 labels= fields= timeout=7m0s
I0911 19:18:51.035090  111245 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (486.506µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45964]
I0911 19:18:51.035806  111245 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=58699 labels= fields= timeout=8m54s
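[Editor's note: the reflector lines above are the standard client-go list-then-watch startup: each informer LISTs with resourceVersion=0, then opens a WATCH from the returned resourceVersion (rv=58699 here); the "caches populated" lines that follow confirm the initial sync. A hedged sketch of the same sequence driven through a SharedInformerFactory, with the clientset construction assumed to exist elsewhere:]

    package watchdemo

    import (
    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    )

    // startPodInformer shows the startup seen in the log: Start kicks off
    // the reflectors (LIST with resourceVersion=0, then WATCH), and
    // WaitForCacheSync corresponds to the "caches populated" lines.
    func startPodInformer(cs kubernetes.Interface, stopCh <-chan struct{}) cache.SharedIndexInformer {
    	factory := informers.NewSharedInformerFactory(cs, 0) // 0 = no periodic resync, as in the log
    	podInformer := factory.Core().V1().Pods().Informer()
    	factory.Start(stopCh)
    	cache.WaitForCacheSync(stopCh, podInformer.HasSynced)
    	return podInformer
    }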
I0911 19:18:51.130416  111245 shared_informer.go:227] caches populated
I0911 19:18:51.230687  111245 shared_informer.go:227] caches populated
I0911 19:18:51.330936  111245 shared_informer.go:227] caches populated
I0911 19:18:51.431175  111245 shared_informer.go:227] caches populated
I0911 19:18:51.531460  111245 shared_informer.go:227] caches populated
I0911 19:18:51.631703  111245 shared_informer.go:227] caches populated
I0911 19:18:51.731916  111245 shared_informer.go:227] caches populated
I0911 19:18:51.832113  111245 shared_informer.go:227] caches populated
I0911 19:18:51.932340  111245 shared_informer.go:227] caches populated
I0911 19:18:52.032603  111245 shared_informer.go:227] caches populated
I0911 19:18:52.132794  111245 shared_informer.go:227] caches populated
I0911 19:18:52.233109  111245 shared_informer.go:227] caches populated
I0911 19:18:52.233332  111245 plugins.go:630] Loaded volume plugin "kubernetes.io/mock-provisioner"
W0911 19:18:52.233680  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:52.233744  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:52.233770  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:52.233785  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 19:18:52.233797  111245 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 19:18:52.233838  111245 pv_controller_base.go:282] Starting persistent volume controller
I0911 19:18:52.233875  111245 shared_informer.go:197] Waiting for caches to sync for persistent volume
I0911 19:18:52.234064  111245 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:52.234087  111245 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0911 19:18:52.234102  111245 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:52.234112  111245 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0911 19:18:52.234339  111245 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:52.234380  111245 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0911 19:18:52.234390  111245 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:52.234404  111245 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0911 19:18:52.234663  111245 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0911 19:18:52.234676  111245 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0911 19:18:52.235563  111245 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (475.185µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45976]
I0911 19:18:52.235567  111245 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (553.197µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45968]
I0911 19:18:52.235585  111245 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (539.956µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45972]
I0911 19:18:52.235585  111245 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (561.261µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45970]
I0911 19:18:52.235638  111245 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (546.591µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45974]
I0911 19:18:52.236294  111245 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=58699 labels= fields= timeout=9m9s
I0911 19:18:52.236338  111245 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=58699 labels= fields= timeout=7m25s
I0911 19:18:52.236341  111245 get.go:250] Starting watch for /api/v1/pods, rv=58699 labels= fields= timeout=7m19s
I0911 19:18:52.236478  111245 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=58699 labels= fields= timeout=9m40s
I0911 19:18:52.236496  111245 get.go:250] Starting watch for /api/v1/nodes, rv=58699 labels= fields= timeout=5m30s
I0911 19:18:52.334041  111245 shared_informer.go:227] caches populated
I0911 19:18:52.334070  111245 shared_informer.go:204] Caches are synced for persistent volume 
I0911 19:18:52.334081  111245 shared_informer.go:227] caches populated
I0911 19:18:52.334088  111245 pv_controller_base.go:158] controller initialized
I0911 19:18:52.334187  111245 pv_controller_base.go:419] resyncing PV controller
I0911 19:18:52.434288  111245 shared_informer.go:227] caches populated
I0911 19:18:52.534452  111245 shared_informer.go:227] caches populated
I0911 19:18:52.634632  111245 shared_informer.go:227] caches populated
I0911 19:18:52.734871  111245 shared_informer.go:227] caches populated
I0911 19:18:52.737854  111245 httplog.go:90] POST /api/v1/nodes: (2.285882ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.738706  111245 node_tree.go:93] Added node "node-1" in group "" to NodeTree
I0911 19:18:52.739826  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.49341ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.741642  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.368297ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.741844  111245 volume_binding_test.go:751] Running test one immediate pv prebound, one wait provisioned
I0911 19:18:52.743405  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.365649ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.744966  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.238142ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.746564  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.202492ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.748514  111245 httplog.go:90] POST /api/v1/persistentvolumes: (1.55785ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.748895  111245 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-i-prebound", version 58950
I0911 19:18:52.748950  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: )", boundByController: false
I0911 19:18:52.748958  111245 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound
I0911 19:18:52.748964  111245 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0911 19:18:52.753280  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (4.331367ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.753616  111245 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound", version 58951
I0911 19:18:52.753645  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:18:52.753694  111245 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: )", boundByController: false
I0911 19:18:52.753702  111245 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:18:52.753711  111245 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:18:52.753728  111245 pv_controller.go:849] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0911 19:18:52.753739  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (1.666808ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:18:52.753886  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58952
I0911 19:18:52.753901  111245 pv_controller.go:798] volume "pv-i-prebound" entered phase "Available"
I0911 19:18:52.753920  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58952
I0911 19:18:52.753945  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: )", boundByController: false
I0911 19:18:52.753951  111245 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound
I0911 19:18:52.753955  111245 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0911 19:18:52.753963  111245 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0911 19:18:52.754943  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (1.031252ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:18:52.754981  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (1.288764ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.755240  111245 pv_controller.go:852] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 19:18:52.755268  111245 pv_controller.go:934] error binding volume "pv-i-prebound" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 19:18:52.755282  111245 pv_controller_base.go:246] could not sync claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
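The 409 above is the apiserver's optimistic-concurrency check: the status PUT a few lines earlier already advanced the volume's resourceVersion, so the spec PUT carrying the stale version is rejected, and the controller simply logs the conflict and defers to a later sync. Client code that wants to retry in place typically uses retry.RetryOnConflict from k8s.io/client-go/util/retry; a sketch of that generic pattern, assuming a clientset cs (this is not the controller's actual code path):

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    func setClaimRef(ctx context.Context, cs kubernetes.Interface, pvName string, ref *v1.ObjectReference) error {
        // Re-read on every attempt so each PUT carries the latest
        // resourceVersion instead of the stale one that caused the 409.
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            pv, err := cs.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
            if err != nil {
                return err
            }
            pv.Spec.ClaimRef = ref
            _, err = cs.CoreV1().PersistentVolumes().Update(ctx, pv, metav1.UpdateOptions{})
            return err
        })
    }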
I0911 19:18:52.755309  111245 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision", version 58953
I0911 19:18:52.755323  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:18:52.755343  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:18:52.755402  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Pending
I0911 19:18:52.755421  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: phase Pending already set
I0911 19:18:52.755457  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-canprovision", UID:"0b1a7323-aa36-46cd-95f4-87c021e4aeb9", APIVersion:"v1", ResourceVersion:"58953", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 19:18:52.757279  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.590959ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
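pvc-canprovision belongs to a StorageClass whose volumeBindingMode is WaitForFirstConsumer, so nothing is provisioned yet; the controller only records the WaitForFirstConsumer event above and waits for a pod to reference the claim. Roughly what creating such a class looks like in Go (class and provisioner names below are illustrative, not from this test):

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createWaitClass(ctx context.Context, cs kubernetes.Interface) error {
        mode := storagev1.VolumeBindingWaitForFirstConsumer
        sc := &storagev1.StorageClass{
            ObjectMeta:        metav1.ObjectMeta{Name: "wait-for-consumer"},   // illustrative name
            Provisioner:       "example.com/test-provisioner",                 // illustrative provisioner
            VolumeBindingMode: &mode, // delay binding/provisioning until a pod uses the claim
        }
        _, err := cs.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{})
        return err
    }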
I0911 19:18:52.757996  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (2.499348ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.758579  111245 scheduling_queue.go:830] About to try and schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned
I0911 19:18:52.758697  111245 scheduler.go:530] Attempting to schedule pod: volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned
E0911 19:18:52.758961  111245 factory.go:557] Error scheduling volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0911 19:18:52.759075  111245 factory.go:615] Updating pod condition for volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
I0911 19:18:52.760637  111245 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.269882ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:18:52.760886  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.60612ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:18:52.761223  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned/status: (1.630464ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45990]
E0911 19:18:52.761511  111245 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
[81 near-identical log lines elided: GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned polled every ~100ms from 19:18:52.860 to 19:19:00.860, each answered 200 in ~1.3-2.1ms]
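The elided polling above is the test's wait loop: it re-reads the pod every ~100ms and checks whether the scheduler has placed it, which cannot succeed while pvc-i-pv-prebound is still Pending (the scheduler keeps rejecting the pod with "pod has unbound immediate PersistentVolumeClaims"). The usual shape of such a loop, sketched with wait.Poll from k8s.io/apimachinery (the timeout value and clientset cs are assumptions):

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForScheduled(cs kubernetes.Interface, ns, name string) error {
        return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            // Scheduled once the scheduler has assigned a node.
            return pod.Spec.NodeName != "", nil
        })
    }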
I0911 19:19:00.939163  111245 httplog.go:90] GET /api/v1/namespaces/default: (1.443361ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:19:00.940901  111245 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.341299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:19:00.942393  111245 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.172696ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
[64 near-identical log lines elided: the same GET poll of pod-i-pv-prebound-w-provisioned every ~100ms from 19:19:00.960 to 19:19:07.260, all 200]
I0911 19:19:07.334414  111245 pv_controller_base.go:419] resyncing PV controller
I0911 19:19:07.334515  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58952
I0911 19:19:07.334553  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: )", boundByController: false
I0911 19:19:07.334560  111245 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound
I0911 19:19:07.334566  111245 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0911 19:19:07.334572  111245 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
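Roughly 15 seconds after the failed bind (compare the 19:18:52.755 conflict with the 19:19:07.334 resync), the PV controller's periodic resync re-lists every volume and claim and re-drives the sync; the claim sync that follows repeats the bind, and this time the PUT succeeds. There is no dedicated retry queue, the cadence alone provides the retry. A generic way to get the same periodic re-sync behavior with client-go informers (illustrative only, not the controller's actual wiring):

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
    )

    func newPVInformers(cs kubernetes.Interface) informers.SharedInformerFactory {
        // With a resync period set, every registered handler periodically
        // receives a synthetic Update for each cached object, re-triggering
        // sync logic even when the object itself did not change.
        factory := informers.NewSharedInformerFactory(cs, 15*time.Second)
        _ = factory.Core().V1().PersistentVolumes().Informer()
        _ = factory.Core().V1().PersistentVolumeClaims().Informer()
        return factory
    }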
I0911 19:19:07.334591  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" with version 58951
I0911 19:19:07.334603  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:07.334630  111245 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: )", boundByController: false
I0911 19:19:07.334710  111245 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:19:07.334740  111245 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:19:07.334783  111245 pv_controller.go:849] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0911 19:19:07.337255  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.118266ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:19:07.337606  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59023
I0911 19:19:07.337679  111245 pv_controller.go:862] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:19:07.337715  111245 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0911 19:19:07.337715  111245 scheduling_queue.go:830] About to try and schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned
I0911 19:19:07.337955  111245 scheduler.go:530] Attempting to schedule pod: volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned
E0911 19:19:07.338213  111245 factory.go:557] Error scheduling volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0911 19:19:07.338309  111245 factory.go:615] Updating pod condition for volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
E0911 19:19:07.338401  111245 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0911 19:19:07.338816  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59023
I0911 19:19:07.338859  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: 4f62efaa-5a04-49a7-b90e-cc3f473897c2)", boundByController: false
I0911 19:19:07.338873  111245 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound
I0911 19:19:07.338892  111245 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:07.338914  111245 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0911 19:19:07.339670  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (1.720718ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:19:07.339871  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59024
I0911 19:19:07.339900  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: 4f62efaa-5a04-49a7-b90e-cc3f473897c2)", boundByController: false
I0911 19:19:07.339911  111245 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound
I0911 19:19:07.339924  111245 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:07.339934  111245 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0911 19:19:07.340005  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59024
I0911 19:19:07.340032  111245 pv_controller.go:798] volume "pv-i-prebound" entered phase "Bound"
I0911 19:19:07.340044  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0911 19:19:07.340058  111245 pv_controller.go:901] volume "pv-i-prebound" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:19:07.341861  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.723912ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:19:07.342136  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-i-pv-prebound: (1.834165ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I0911 19:19:07.342348  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" with version 59026
I0911 19:19:07.342404  111245 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I0911 19:19:07.342413  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound] status: set phase Bound
I0911 19:19:07.342906  111245 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (2.597992ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:07.344129  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-i-pv-prebound/status: (1.485568ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45986]
I0911 19:19:07.344328  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" with version 59027
I0911 19:19:07.344474  111245 pv_controller.go:742] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" entered phase "Bound"
I0911 19:19:07.344553  111245 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:19:07.344620  111245 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: 4f62efaa-5a04-49a7-b90e-cc3f473897c2)", boundByController: false
I0911 19:19:07.344677  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
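The "status after binding" lines above are the controller's post-bind verification: the volume's ClaimRef now carries the claim's UID (4f62efaa-...), and the claim reports bindCompleted: true, which reflects that spec.volumeName is set and the controller's bind-completed annotation has been written. Checking the same invariants from a test might look like this (hypothetical helper, clientset cs assumed):

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func isBoundPair(ctx context.Context, cs kubernetes.Interface, ns, pvName, pvcName string) (bool, error) {
        pv, err := cs.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, pvcName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        // Both halves must agree: PV -> claim (with UID) and claim -> PV.
        return pv.Spec.ClaimRef != nil &&
            pv.Spec.ClaimRef.UID == pvc.UID &&
            pvc.Spec.VolumeName == pv.Name &&
            pvc.Status.Phase == v1.ClaimBound, nil
    }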
I0911 19:19:07.344741  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 58953
I0911 19:19:07.344790  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:07.344845  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:07.344906  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Pending
I0911 19:19:07.344953  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: phase Pending already set
I0911 19:19:07.345009  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" with version 59027
I0911 19:19:07.345061  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0911 19:19:07.345009  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-canprovision", UID:"0b1a7323-aa36-46cd-95f4-87c021e4aeb9", APIVersion:"v1", ResourceVersion:"58953", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 19:19:07.345119  111245 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: 4f62efaa-5a04-49a7-b90e-cc3f473897c2)", boundByController: false
I0911 19:19:07.345210  111245 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: claim is already correctly bound
I0911 19:19:07.345257  111245 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:19:07.345295  111245 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:19:07.345340  111245 pv_controller.go:841] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:19:07.345414  111245 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0911 19:19:07.345454  111245 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I0911 19:19:07.345498  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0911 19:19:07.345547  111245 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I0911 19:19:07.345591  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound] status: set phase Bound
I0911 19:19:07.345636  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound] status: phase Bound already set
I0911 19:19:07.345683  111245 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound"
I0911 19:19:07.345733  111245 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: 4f62efaa-5a04-49a7-b90e-cc3f473897c2)", boundByController: false
I0911 19:19:07.345776  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0911 19:19:07.347353  111245 httplog.go:90] PATCH /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events/pvc-canprovision.15c378738228d803: (1.862611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
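
The 'WaitForFirstConsumer' event recorded above (and PATCHed into the events API here) means the claim's StorageClass defers binding until a pod that uses the claim is actually scheduled. A minimal sketch of such a class, assuming a recent k8s.io/api and client-go; the class name "wait-example" is hypothetical, while the provisioner string is the mock one from this log:

    // Sketch: a StorageClass that defers volume binding until first consumer,
    // like the "wait-*" classes in this test. Assumes a recent client-go.
    package sc

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createWaitClass(ctx context.Context, cs kubernetes.Interface) error {
        mode := storagev1.VolumeBindingWaitForFirstConsumer
        sc := &storagev1.StorageClass{
            ObjectMeta:  metav1.ObjectMeta{Name: "wait-example"},
            Provisioner: "kubernetes.io/mock-provisioner",
            // Without this, binding mode defaults to Immediate and the
            // WaitForFirstConsumer events above would not appear.
            VolumeBindingMode: &mode,
        }
        _, err := cs.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{})
        return err
    }
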
I0911 19:19:07.360522  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.677661ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:07.460648  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.832207ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:07.560563  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.748636ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:07.660458  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.700929ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:07.760653  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.825723ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:07.860723  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.827462ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:07.960617  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.756913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:08.060743  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.860588ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:08.160658  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.834084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:08.261142  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (2.019978ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:08.360587  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.732252ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:08.460647  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.755925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:08.560713  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.873185ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:08.660535  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.717666ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:08.760424  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.624935ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:08.860177  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.379309ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:08.960509  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.665485ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:09.032101  111245 scheduling_queue.go:830] About to try and schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned
I0911 19:19:09.032145  111245 scheduler.go:530] Attempting to schedule pod: volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned
I0911 19:19:09.032422  111245 scheduler_binder.go:652] All bound volumes for Pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned" match with Node "node-1"
I0911 19:19:09.032485  111245 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned", PVC "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" on node "node-1"
I0911 19:19:09.032500  111245 scheduler_binder.go:734] Provisioning for claims of pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I0911 19:19:09.032576  111245 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned", node "node-1"
I0911 19:19:09.032598  111245 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision", version 58953
I0911 19:19:09.032643  111245 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned", node "node-1"
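
AssumePodVolumes and BindPodVolumes above are the two halves of the scheduler's optimistic volume binding: the assume step only updates an in-memory cache (note there is no API traffic until the PUT that follows), and the bind step then commits the writes. A stripped-down sketch of that assume-then-commit pattern; every name here is hypothetical, not the scheduler's own types:

    // Sketch of the assume-then-bind pattern the scheduler_binder lines
    // describe: mutate an in-memory cache optimistically, then commit via
    // the API server and roll the cache back on failure.
    package binder

    import "sync"

    type AssumeCache struct {
        mu      sync.Mutex
        assumed map[string]string // claim key -> volume name tentatively bound
    }

    func NewAssumeCache() *AssumeCache {
        return &AssumeCache{assumed: map[string]string{}}
    }

    // Assume records the tentative binding without touching the API server,
    // mirroring the "Assumed v1.PersistentVolumeClaim ..." line above.
    func (c *AssumeCache) Assume(claimKey, volume string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.assumed[claimKey] = volume
    }

    // Bind commits the assumed binding; on error the assumption is dropped
    // so the next scheduling cycle sees the real (unbound) state again.
    func (c *AssumeCache) Bind(claimKey string, commit func(volume string) error) error {
        c.mu.Lock()
        volume, ok := c.assumed[claimKey]
        c.mu.Unlock()
        if !ok {
            return nil // nothing assumed for this claim
        }
        if err := commit(volume); err != nil {
            c.mu.Lock()
            delete(c.assumed, claimKey) // roll back the optimistic state
            c.mu.Unlock()
            return err
        }
        return nil
    }
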
I0911 19:19:09.035420  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (2.325711ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:09.035811  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59038
I0911 19:19:09.035849  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:09.035882  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:09.035901  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: started
I0911 19:19:09.035924  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[0b1a7323-aa36-46cd-95f4-87c021e4aeb9]]
I0911 19:19:09.036004  111245 pv_controller.go:1372] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] started, class: "wait-k4sz"
I0911 19:19:09.038642  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (2.285718ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:09.038764  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59039
I0911 19:19:09.038796  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:09.038821  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:09.038828  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: started
I0911 19:19:09.038841  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[0b1a7323-aa36-46cd-95f4-87c021e4aeb9]]
I0911 19:19:09.038847  111245 pv_controller.go:1642] operation "provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[0b1a7323-aa36-46cd-95f4-87c021e4aeb9]" is already running, skipping
I0911 19:19:09.038873  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59039
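
The "is already running, skipping" line above is the controller's per-key operation dedup: provision and delete operations run as named goroutines, and scheduling a second operation under the same name while the first is in flight is a no-op. A minimal stand-in for that pattern (hypothetical names, not the controller's own goroutinemap):

    // Sketch of per-key dedup behind "operation ... is already running,
    // skipping": start a named goroutine only if no goroutine with that
    // key is still in flight.
    package opmap

    import "sync"

    type OperationMap struct {
        mu      sync.Mutex
        running map[string]bool
    }

    func New() *OperationMap {
        return &OperationMap{running: map[string]bool{}}
    }

    // Run launches op under key unless one is already in flight, in which
    // case it reports false and does nothing (the caller just logs and moves on).
    func (m *OperationMap) Run(key string, op func()) bool {
        m.mu.Lock()
        if m.running[key] {
            m.mu.Unlock()
            return false
        }
        m.running[key] = true
        m.mu.Unlock()

        go func() {
            defer func() {
                m.mu.Lock()
                delete(m.running, key)
                m.mu.Unlock()
            }()
            op()
        }()
        return true
    }
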
I0911 19:19:09.040212  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9: (1.128892ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:09.040509  111245 pv_controller.go:1476] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" created
I0911 19:19:09.040534  111245 pv_controller.go:1493] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: trying to save volume pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9
I0911 19:19:09.042094  111245 httplog.go:90] POST /api/v1/persistentvolumes: (1.326667ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:09.042339  111245 pv_controller.go:1501] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" saved
I0911 19:19:09.042384  111245 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9", version 59040
I0911 19:19:09.042411  111245 pv_controller.go:1554] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" provisioned for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:09.042446  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-canprovision", UID:"0b1a7323-aa36-46cd-95f4-87c021e4aeb9", APIVersion:"v1", ResourceVersion:"59039", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9 using kubernetes.io/mock-provisioner
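
The GET 404 followed by the POST above is the provisioner's check-then-create: the volume name is deterministic ("pvc-" plus the claim UID), so a lookup tells the controller whether an earlier, interrupted attempt already saved it. A hedged client-go sketch of that step; ensurePV is a hypothetical helper, and the signatures assume a recent client-go:

    package provision

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensurePV saves a freshly provisioned PersistentVolume, tolerating the
    // case where a previous, interrupted attempt already created it.
    func ensurePV(ctx context.Context, cs kubernetes.Interface, pv *corev1.PersistentVolume) error {
        _, err := cs.CoreV1().PersistentVolumes().Get(ctx, pv.Name, metav1.GetOptions{})
        if err == nil {
            return nil // already saved earlier; nothing to do
        }
        if !apierrors.IsNotFound(err) {
            return err // genuine lookup failure; caller should retry
        }
        _, err = cs.CoreV1().PersistentVolumes().Create(ctx, pv, metav1.CreateOptions{})
        return err
    }
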
I0911 19:19:09.042504  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" with version 59040
I0911 19:19:09.042539  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 0b1a7323-aa36-46cd-95f4-87c021e4aeb9)", boundByController: true
I0911 19:19:09.042552  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:09.042566  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:09.042576  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:09.042594  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59039
I0911 19:19:09.042604  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:09.042626  111245 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" found: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 0b1a7323-aa36-46cd-95f4-87c021e4aeb9)", boundByController: true
I0911 19:19:09.042634  111245 pv_controller.go:931] binding volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:09.042643  111245 pv_controller.go:829] updating PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:09.042655  111245 pv_controller.go:841] updating PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:09.042662  111245 pv_controller.go:777] updating PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: set phase Bound
I0911 19:19:09.044287  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.562318ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:09.044287  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9/status: (1.432976ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.044603  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" with version 59042
I0911 19:19:09.044653  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 0b1a7323-aa36-46cd-95f4-87c021e4aeb9)", boundByController: true
I0911 19:19:09.044662  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:09.044674  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:09.044686  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:09.044689  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" with version 59042
I0911 19:19:09.044715  111245 pv_controller.go:798] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" entered phase "Bound"
I0911 19:19:09.044730  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: binding to "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9"
I0911 19:19:09.044747  111245 pv_controller.go:901] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:09.046817  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.839691ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.047008  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59043
I0911 19:19:09.047038  111245 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: bound to "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9"
I0911 19:19:09.047049  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Bound
I0911 19:19:09.048799  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision/status: (1.514416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.048985  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59044
I0911 19:19:09.049001  111245 pv_controller.go:742] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" entered phase "Bound"
I0911 19:19:09.049012  111245 pv_controller.go:957] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:09.049032  111245 pv_controller.go:958] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 0b1a7323-aa36-46cd-95f4-87c021e4aeb9)", boundByController: true
I0911 19:19:09.049043  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9", bindCompleted: true, boundByController: true
I0911 19:19:09.049073  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59044
I0911 19:19:09.049088  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Bound, bound to: "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9", bindCompleted: true, boundByController: true
I0911 19:19:09.049100  111245 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" found: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 0b1a7323-aa36-46cd-95f4-87c021e4aeb9)", boundByController: true
I0911 19:19:09.049108  111245 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: claim is already correctly bound
I0911 19:19:09.049122  111245 pv_controller.go:931] binding volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:09.049130  111245 pv_controller.go:829] updating PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:09.049141  111245 pv_controller.go:841] updating PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:09.049148  111245 pv_controller.go:777] updating PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: set phase Bound
I0911 19:19:09.049153  111245 pv_controller.go:780] updating PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: phase Bound already set
I0911 19:19:09.049159  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: binding to "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9"
I0911 19:19:09.049173  111245 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: already bound to "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9"
I0911 19:19:09.049179  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Bound
I0911 19:19:09.049194  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: phase Bound already set
I0911 19:19:09.049202  111245 pv_controller.go:957] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:09.049215  111245 pv_controller.go:958] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 0b1a7323-aa36-46cd-95f4-87c021e4aeb9)", boundByController: true
I0911 19:19:09.049224  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9", bindCompleted: true, boundByController: true
I0911 19:19:09.060590  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.815ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.160655  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.744808ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.260439  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.570153ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.360637  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.740775ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.460576  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.732545ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.560503  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.67542ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.660633  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.744973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.760460  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.621496ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.860500  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.606053ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:09.960356  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.563196ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.032136  111245 cache.go:669] Couldn't expire cache for pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned. Binding is still in progress.
I0911 19:19:10.036021  111245 scheduler_binder.go:546] All PVCs for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned" are bound
I0911 19:19:10.036078  111245 factory.go:606] Attempting to bind pod-i-pv-prebound-w-provisioned to node-1
I0911 19:19:10.039183  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned/binding: (2.814644ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.039563  111245 scheduler.go:667] pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 19:19:10.041407  111245 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.49498ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.060390  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-pv-prebound-w-provisioned: (1.602025ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.062299  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-i-pv-prebound: (1.333459ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.063995  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.174551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.065547  111245 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (1.023651ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.070286  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (4.312632ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.074186  111245 pv_controller_base.go:258] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" deleted
I0911 19:19:10.074225  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" with version 59042
I0911 19:19:10.074252  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 0b1a7323-aa36-46cd-95f4-87c021e4aeb9)", boundByController: true
I0911 19:19:10.074262  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:10.075705  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (5.06875ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.075757  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.304416ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:10.075925  111245 pv_controller.go:547] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision not found
I0911 19:19:10.075944  111245 pv_controller.go:575] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" is released and reclaim policy "Delete" will be executed
I0911 19:19:10.075955  111245 pv_controller.go:777] updating PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: set phase Released
I0911 19:19:10.076012  111245 pv_controller_base.go:258] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" deleted
I0911 19:19:10.077278  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9/status: (1.12598ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.077664  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" with version 59051
I0911 19:19:10.077694  111245 pv_controller.go:798] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" entered phase "Released"
I0911 19:19:10.077707  111245 pv_controller.go:1022] reclaimVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: policy is Delete
I0911 19:19:10.077728  111245 pv_controller.go:1631] scheduleOperation[delete-pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9[b6ec0591-36b6-46ec-9a6f-588818f4c973]]
I0911 19:19:10.077754  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59024
I0911 19:19:10.077777  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound (uid: 4f62efaa-5a04-49a7-b90e-cc3f473897c2)", boundByController: false
I0911 19:19:10.077788  111245 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound
I0911 19:19:10.077815  111245 pv_controller.go:547] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound not found
I0911 19:19:10.077834  111245 pv_controller.go:575] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I0911 19:19:10.077843  111245 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Released
I0911 19:19:10.077887  111245 pv_controller.go:1146] deleteVolumeOperation [pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9] started
I0911 19:19:10.079125  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9: (861.077µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.079386  111245 pv_controller.go:1250] isVolumeReleased[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: volume is released
I0911 19:19:10.079406  111245 pv_controller.go:1285] doDeleteVolume [pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]
I0911 19:19:10.079435  111245 pv_controller.go:1316] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" deleted
I0911 19:19:10.079531  111245 pv_controller.go:1193] deleteVolumeOperation [pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: success
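
The asymmetry in the cleanup above is the reclaim policy at work: the dynamically provisioned pvc-0b1a... carries policy Delete, so losing its claim triggers deleteVolumeOperation, while the hand-created pv-i-prebound carries Retain and is only moved to phase Released. The two behaviours map to k8s.io/api/core/v1 constants; a trivial sketch:

    // The two reclaim behaviours seen above. Delete removes the PV object
    // once its claim is gone; Retain keeps the PV in phase Released for
    // manual cleanup.
    package reclaim

    import corev1 "k8s.io/api/core/v1"

    var (
        deleted  = corev1.PersistentVolumeReclaimDelete // pvc-0b1a... above
        retained = corev1.PersistentVolumeReclaimRetain // pv-i-prebound above
    )
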
I0911 19:19:10.080389  111245 store.go:362] GuaranteedUpdate of /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pv-i-prebound failed because of a conflict, going to retry
I0911 19:19:10.080521  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (2.452916ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.080658  111245 pv_controller.go:790] updating PersistentVolume[pv-i-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": StorageError: invalid object, Code: 4, Key: /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pv-i-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 75c8cb4d-5628-4602-9d27-ab8e685dc847, UID in object meta: 
I0911 19:19:10.080682  111245 pv_controller_base.go:202] could not sync volume "pv-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": StorageError: invalid object, Code: 4, Key: /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pv-i-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 75c8cb4d-5628-4602-9d27-ab8e685dc847, UID in object meta: 
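
The 409 above is ordinary optimistic concurrency: the controller's PUT carried a stale precondition (the PV had already been deleted out from under it), so the apiserver refuses and the controller simply requeues the volume. For the more common "the object has been modified" variant, visible further down for pv-w-prebound, client code typically re-fetches and retries; a sketch using client-go's retry helper (assuming a recent client-go; markVolumeReleased is hypothetical):

    package conflict

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markVolumeReleased retries the status write whenever the apiserver
    // answers 409 Conflict, re-reading the object each attempt so the PUT
    // carries a fresh resourceVersion.
    func markVolumeReleased(ctx context.Context, cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            pv, err := cs.CoreV1().PersistentVolumes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err // non-conflict errors (e.g. 404) abort the loop
            }
            pv.Status.Phase = corev1.VolumeReleased
            _, err = cs.CoreV1().PersistentVolumes().UpdateStatus(ctx, pv, metav1.UpdateOptions{})
            return err
        })
    }
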
I0911 19:19:10.080706  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" with version 59051
I0911 19:19:10.080727  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: phase: Released, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 0b1a7323-aa36-46cd-95f4-87c021e4aeb9)", boundByController: true
I0911 19:19:10.080740  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:10.080757  111245 pv_controller.go:547] synchronizing PersistentVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision not found
I0911 19:19:10.080763  111245 pv_controller.go:1022] reclaimVolume[pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9]: policy is Delete
I0911 19:19:10.080774  111245 pv_controller.go:1631] scheduleOperation[delete-pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9[b6ec0591-36b6-46ec-9a6f-588818f4c973]]
I0911 19:19:10.080780  111245 pv_controller.go:1642] operation "delete-pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9[b6ec0591-36b6-46ec-9a6f-588818f4c973]" is already running, skipping
I0911 19:19:10.080792  111245 pv_controller_base.go:212] volume "pv-i-prebound" deleted
I0911 19:19:10.080810  111245 pv_controller_base.go:396] deletion of claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-i-pv-prebound" was already processed
I0911 19:19:10.081896  111245 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9: (2.214755ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.082113  111245 store.go:228] deletion of /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9 failed because of a conflict, going to retry
I0911 19:19:10.082212  111245 pv_controller_base.go:212] volume "pvc-0b1a7323-aa36-46cd-95f4-87c021e4aeb9" deleted
I0911 19:19:10.082268  111245 pv_controller_base.go:396] deletion of claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" was already processed
I0911 19:19:10.082413  111245 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.393195ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45988]
I0911 19:19:10.092207  111245 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (9.36454ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.092350  111245 volume_binding_test.go:751] Running test wait one pv prebound, one provisioned
I0911 19:19:10.093564  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.000096ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.094898  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.005829ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.096274  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.055126ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.097922  111245 httplog.go:90] POST /api/v1/persistentvolumes: (1.235098ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.098980  111245 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-prebound", version 59062
I0911 19:19:10.099018  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound (uid: )", boundByController: false
I0911 19:19:10.099026  111245 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound
I0911 19:19:10.099034  111245 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
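
"volume is pre-bound to claim ... (uid: )" above means the PV was created with spec.claimRef already naming its claim but with the UID left empty; the controller completes the binding, filling in the UID, once the claim exists (as the later "(uid: d4f5099d-...)" lines show). A sketch of such a pre-bound PV using k8s.io/api types; the capacity and hostPath values are made up:

    package prebound

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // preboundPV builds a PV pre-bound to a specific claim via spec.claimRef,
    // as in the "(uid: )" log lines above. The empty UID lets the controller
    // complete the binding once the named claim exists.
    func preboundPV(ns string) *corev1.PersistentVolume {
        return &corev1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{Name: "pv-w-prebound"},
            Spec: corev1.PersistentVolumeSpec{
                Capacity: corev1.ResourceList{
                    corev1.ResourceStorage: resource.MustParse("1Gi"),
                },
                AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                PersistentVolumeSource: corev1.PersistentVolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/pv-w-prebound"},
                },
                // Pre-binding: name the claim, leave the UID empty.
                ClaimRef: &corev1.ObjectReference{
                    Namespace: ns,
                    Name:      "pvc-w-pv-prebound",
                },
            },
        }
    }
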
I0911 19:19:10.099699  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (1.439383ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.099771  111245 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound", version 59063
I0911 19:19:10.099843  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:10.099882  111245 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound (uid: )", boundByController: false
I0911 19:19:10.099927  111245 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.099963  111245 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.100010  111245 pv_controller.go:849] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0911 19:19:10.101101  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.858068ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.101269  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (1.097905ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.101299  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59064
I0911 19:19:10.101318  111245 pv_controller.go:798] volume "pv-w-prebound" entered phase "Available"
I0911 19:19:10.101344  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59064
I0911 19:19:10.101398  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound (uid: )", boundByController: false
I0911 19:19:10.101407  111245 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound
I0911 19:19:10.101413  111245 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0911 19:19:10.101421  111245 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Available already set
I0911 19:19:10.101983  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (1.313242ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.102197  111245 pv_controller.go:852] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 19:19:10.102221  111245 pv_controller.go:934] error binding volume "pv-w-prebound" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 19:19:10.102234  111245 pv_controller_base.go:246] could not sync claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 19:19:10.102346  111245 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision", version 59065
I0911 19:19:10.102385  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:10.102445  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:10.102480  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Pending
I0911 19:19:10.102524  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: phase Pending already set
I0911 19:19:10.102640  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-canprovision", UID:"5421fdf0-fedf-470b-9ed9-3100b7e7967b", APIVersion:"v1", ResourceVersion:"59065", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 19:19:10.103294  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (1.524174ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.103577  111245 scheduling_queue.go:830] About to try and schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-w-pv-prebound-w-provisioned
I0911 19:19:10.103603  111245 scheduler.go:530] Attempting to schedule pod: volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-w-pv-prebound-w-provisioned
I0911 19:19:10.103777  111245 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-w-pv-prebound-w-provisioned", PVC "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" on node "node-1"
I0911 19:19:10.103797  111245 scheduler_binder.go:734] Provisioning for claims of pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-w-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I0911 19:19:10.103846  111245 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-w-pv-prebound-w-provisioned", node "node-1"
I0911 19:19:10.103867  111245 scheduler_assume_cache.go:320] Assumed v1.PersistentVolume "pv-w-prebound", version 59064
I0911 19:19:10.103880  111245 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision", version 59065
I0911 19:19:10.103925  111245 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-w-pv-prebound-w-provisioned", node "node-1"
I0911 19:19:10.103948  111245 scheduler_binder.go:400] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0911 19:19:10.104171  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.414774ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.105620  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (1.445591ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46104]
I0911 19:19:10.105848  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59068
I0911 19:19:10.105885  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound (uid: d4f5099d-a87e-4422-91b7-d2f138875da7)", boundByController: false
I0911 19:19:10.105898  111245 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound
I0911 19:19:10.105928  111245 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:10.105929  111245 scheduler_binder.go:406] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.105948  111245 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0911 19:19:10.105973  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" with version 59063
I0911 19:19:10.106087  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:10.106201  111245 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound (uid: d4f5099d-a87e-4422-91b7-d2f138875da7)", boundByController: false
I0911 19:19:10.106294  111245 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.106349  111245 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.106454  111245 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.106515  111245 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0911 19:19:10.108133  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.384863ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.108349  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59069
I0911 19:19:10.108420  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound (uid: d4f5099d-a87e-4422-91b7-d2f138875da7)", boundByController: false
I0911 19:19:10.108435  111245 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound
I0911 19:19:10.108452  111245 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:10.108468  111245 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0911 19:19:10.108567  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.80151ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.108580  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59069
I0911 19:19:10.108851  111245 pv_controller.go:798] volume "pv-w-prebound" entered phase "Bound"
I0911 19:19:10.108862  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0911 19:19:10.108876  111245 pv_controller.go:901] volume "pv-w-prebound" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.110341  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-w-pv-prebound: (1.280703ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.110611  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" with version 59071
I0911 19:19:10.110712  111245 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I0911 19:19:10.110787  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound] status: set phase Bound
I0911 19:19:10.112451  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-w-pv-prebound/status: (1.34106ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.112739  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" with version 59072
I0911 19:19:10.112765  111245 pv_controller.go:742] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" entered phase "Bound"
I0911 19:19:10.112778  111245 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.112793  111245 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound (uid: d4f5099d-a87e-4422-91b7-d2f138875da7)", boundByController: false
I0911 19:19:10.112847  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0911 19:19:10.112916  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59070
I0911 19:19:10.112935  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:10.112996  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:10.113011  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: started
I0911 19:19:10.113058  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[5421fdf0-fedf-470b-9ed9-3100b7e7967b]]
I0911 19:19:10.113116  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" with version 59072
I0911 19:19:10.113133  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0911 19:19:10.113174  111245 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound (uid: d4f5099d-a87e-4422-91b7-d2f138875da7)", boundByController: false
I0911 19:19:10.113190  111245 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: claim is already correctly bound
I0911 19:19:10.113196  111245 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.113214  111245 pv_controller.go:1372] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] started, class: "wait-xs47"
I0911 19:19:10.113233  111245 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.113422  111245 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.113429  111245 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0911 19:19:10.113438  111245 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I0911 19:19:10.113498  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0911 19:19:10.113566  111245 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I0911 19:19:10.113585  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound] status: set phase Bound
I0911 19:19:10.113640  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound] status: phase Bound already set
I0911 19:19:10.113657  111245 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound"
I0911 19:19:10.113704  111245 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound (uid: d4f5099d-a87e-4422-91b7-d2f138875da7)", boundByController: false
I0911 19:19:10.113724  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0911 19:19:10.115035  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.3255ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.115200  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59073
I0911 19:19:10.115229  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:10.115251  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:10.115260  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: started
I0911 19:19:10.115274  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[5421fdf0-fedf-470b-9ed9-3100b7e7967b]]
I0911 19:19:10.115282  111245 pv_controller.go:1642] operation "provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[5421fdf0-fedf-470b-9ed9-3100b7e7967b]" is already running, skipping
I0911 19:19:10.115330  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59073
I0911 19:19:10.116437  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b: (884.856µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.116689  111245 pv_controller.go:1476] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" created
I0911 19:19:10.116714  111245 pv_controller.go:1493] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: trying to save volume pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b
I0911 19:19:10.118186  111245 httplog.go:90] POST /api/v1/persistentvolumes: (1.264783ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.118406  111245 pv_controller.go:1501] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" saved
I0911 19:19:10.118493  111245 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b", version 59074
I0911 19:19:10.118548  111245 pv_controller.go:1554] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" provisioned for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:10.118413  111245 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b", version 59074
I0911 19:19:10.118670  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 5421fdf0-fedf-470b-9ed9-3100b7e7967b)", boundByController: true
I0911 19:19:10.118682  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:10.118695  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:10.118706  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:10.118764  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59073
I0911 19:19:10.118776  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:10.118796  111245 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" found: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 5421fdf0-fedf-470b-9ed9-3100b7e7967b)", boundByController: true
I0911 19:19:10.118810  111245 pv_controller.go:931] binding volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:10.118836  111245 pv_controller.go:829] updating PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:10.118847  111245 pv_controller.go:841] updating PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:10.118853  111245 pv_controller.go:777] updating PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: set phase Bound
I0911 19:19:10.118616  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-canprovision", UID:"5421fdf0-fedf-470b-9ed9-3100b7e7967b", APIVersion:"v1", ResourceVersion:"59073", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b using kubernetes.io/mock-provisioner
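The lines above show dynamic provisioning end to end: the controller derives the PV name from the claim UID (pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b), confirms via the 404 GET that no such volume exists yet, creates and saves it with a POST, then emits the ProvisioningSucceeded event. A toy Go sketch of that happy path, with a map standing in for the API server (the real code goes through client-go and a volume plugin):

package main

import (
	"errors"
	"fmt"
)

// store stands in for the API server's PV collection.
var store = map[string]bool{}

var errNotFound = errors.New("not found")

func getPV(name string) error {
	if !store[name] {
		return errNotFound // the GET /api/v1/persistentvolumes/<name> 404 in the log
	}
	return nil
}

// provision mirrors provisionClaimOperation's happy path.
func provision(claimUID, claimName string) {
	pvName := "pvc-" + claimUID // provisioned PV names embed the claim's UID
	if err := getPV(pvName); errors.Is(err, errNotFound) {
		store[pvName] = true // the POST /api/v1/persistentvolumes in the log
		fmt.Printf("volume %q for claim %q saved\n", pvName, claimName)
		fmt.Printf("event ProvisioningSucceeded for claim %q\n", claimName)
	}
}

func main() {
	provision("5421fdf0-fedf-470b-9ed9-3100b7e7967b", "pvc-canprovision")
}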
I0911 19:19:10.120277  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b/status: (1.2257ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:10.120635  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" with version 59075
I0911 19:19:10.120675  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 5421fdf0-fedf-470b-9ed9-3100b7e7967b)", boundByController: true
I0911 19:19:10.120689  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:10.120706  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:10.120721  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:10.120642  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.713777ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.120989  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" with version 59075
I0911 19:19:10.121016  111245 pv_controller.go:798] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" entered phase "Bound"
I0911 19:19:10.121029  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: binding to "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b"
I0911 19:19:10.121045  111245 pv_controller.go:901] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:10.122602  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.331159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.122898  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59077
I0911 19:19:10.122925  111245 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: bound to "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b"
I0911 19:19:10.122941  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Bound
I0911 19:19:10.125074  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision/status: (1.923069ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.125274  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59078
I0911 19:19:10.125301  111245 pv_controller.go:742] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" entered phase "Bound"
I0911 19:19:10.125318  111245 pv_controller.go:957] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:10.125339  111245 pv_controller.go:958] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 5421fdf0-fedf-470b-9ed9-3100b7e7967b)", boundByController: true
I0911 19:19:10.125356  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b", bindCompleted: true, boundByController: true
I0911 19:19:10.125410  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59078
I0911 19:19:10.125428  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Bound, bound to: "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b", bindCompleted: true, boundByController: true
I0911 19:19:10.125445  111245 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" found: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 5421fdf0-fedf-470b-9ed9-3100b7e7967b)", boundByController: true
I0911 19:19:10.125457  111245 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: claim is already correctly bound
I0911 19:19:10.125466  111245 pv_controller.go:931] binding volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:10.125477  111245 pv_controller.go:829] updating PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:10.125494  111245 pv_controller.go:841] updating PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:10.125503  111245 pv_controller.go:777] updating PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: set phase Bound
I0911 19:19:10.125510  111245 pv_controller.go:780] updating PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: phase Bound already set
I0911 19:19:10.125518  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: binding to "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b"
I0911 19:19:10.125541  111245 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: already bound to "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b"
I0911 19:19:10.125551  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Bound
I0911 19:19:10.125571  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: phase Bound already set
I0911 19:19:10.125583  111245 pv_controller.go:957] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:10.125603  111245 pv_controller.go:958] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 5421fdf0-fedf-470b-9ed9-3100b7e7967b)", boundByController: true
I0911 19:19:10.125621  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b", bindCompleted: true, boundByController: true
I0911 19:19:10.206551  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (1.956187ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.305765  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (1.747274ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.405817  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (1.785413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.506578  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (2.521825ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.606436  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (2.248143ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.706669  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (2.432723ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.806608  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (2.403546ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.906692  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (2.490999ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.940908  111245 httplog.go:90] GET /api/v1/namespaces/default: (2.696678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.943832  111245 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.970989ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:10.946793  111245 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.280261ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.007053  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (2.777092ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.032450  111245 cache.go:669] Couldn't expire cache for pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-w-pv-prebound-w-provisioned. Binding is still in progress.
I0911 19:19:11.106800  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (2.658691ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.109286  111245 scheduler_binder.go:546] All PVCs for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-w-pv-prebound-w-provisioned" are bound
I0911 19:19:11.109420  111245 factory.go:606] Attempting to bind pod-w-pv-prebound-w-provisioned to node-1
I0911 19:19:11.112538  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned/binding: (2.728996ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.112972  111245 scheduler.go:667] pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-w-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 19:19:11.116338  111245 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (2.826783ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
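The POST to .../pods/pod-w-pv-prebound-w-provisioned/binding above is the scheduler committing its decision through the pods "binding" subresource. A minimal sketch of the same call, assuming a current client-go (the context-taking Bind signature) and a reachable kubeconfig; namespace, pod, and node names here are placeholders:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// bindPod issues the same request as the log's POST .../pods/<name>/binding:
// a Binding object whose Target names the chosen node.
func bindPod(cs kubernetes.Interface, ns, pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	}
	return cs.CoreV1().Pods(ns).Bind(context.TODO(), binding, metav1.CreateOptions{})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := bindPod(cs, "default", "pod-w-pv-prebound-w-provisioned", "node-1"); err != nil {
		panic(err)
	}
	fmt.Println("pod bound")
}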
I0911 19:19:11.205940  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-w-pv-prebound-w-provisioned: (1.754645ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.207862  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-w-pv-prebound: (1.377498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.209487  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.17427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.211026  111245 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (1.043434ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.215859  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (4.241741ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.219823  111245 pv_controller_base.go:258] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" deleted
I0911 19:19:11.219859  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" with version 59075
I0911 19:19:11.219885  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 5421fdf0-fedf-470b-9ed9-3100b7e7967b)", boundByController: true
I0911 19:19:11.219894  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:11.221080  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (972.319µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:11.221332  111245 pv_controller.go:547] synchronizing PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision not found
I0911 19:19:11.221409  111245 pv_controller.go:575] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" is released and reclaim policy "Delete" will be executed
I0911 19:19:11.221449  111245 pv_controller.go:777] updating PersistentVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: set phase Released
I0911 19:19:11.222246  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (5.838196ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.222554  111245 pv_controller_base.go:258] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" deleted
I0911 19:19:11.223538  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b/status: (1.812138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:11.223853  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" with version 59094
I0911 19:19:11.223876  111245 pv_controller.go:798] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" entered phase "Released"
I0911 19:19:11.223885  111245 pv_controller.go:1022] reclaimVolume[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: policy is Delete
I0911 19:19:11.223907  111245 pv_controller.go:1631] scheduleOperation[delete-pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b[456b23de-f413-46bd-91ab-77b76737e857]]
I0911 19:19:11.223927  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59069
I0911 19:19:11.223946  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound (uid: d4f5099d-a87e-4422-91b7-d2f138875da7)", boundByController: false
I0911 19:19:11.223954  111245 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound
I0911 19:19:11.223970  111245 pv_controller.go:547] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound not found
I0911 19:19:11.223980  111245 pv_controller.go:575] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I0911 19:19:11.223986  111245 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Released
I0911 19:19:11.224069  111245 pv_controller.go:1146] deleteVolumeOperation [pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b] started
I0911 19:19:11.225830  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b: (1.142595ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.226148  111245 pv_controller.go:1250] isVolumeReleased[pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: volume is released
I0911 19:19:11.226171  111245 pv_controller.go:1285] doDeleteVolume [pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]
I0911 19:19:11.226195  111245 pv_controller.go:1316] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" deleted
I0911 19:19:11.226202  111245 pv_controller.go:1193] deleteVolumeOperation [pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b]: success
I0911 19:19:11.229036  111245 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b: (2.620611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.229703  111245 store.go:228] deletion of /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b failed because of a conflict, going to retry
I0911 19:19:11.229805  111245 store.go:362] GuaranteedUpdate of /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pv-w-prebound failed because of a conflict, going to retry
I0911 19:19:11.229920  111245 httplog.go:90] DELETE /api/v1/persistentvolumes: (7.214999ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.230024  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (5.823858ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0911 19:19:11.230310  111245 pv_controller.go:790] updating PersistentVolume[pv-w-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": StorageError: invalid object, Code: 4, Key: /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pv-w-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d118a907-cb63-4db4-b660-a2dc37b0f2eb, UID in object meta: 
I0911 19:19:11.230436  111245 pv_controller_base.go:202] could not sync volume "pv-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": StorageError: invalid object, Code: 4, Key: /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pv-w-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d118a907-cb63-4db4-b660-a2dc37b0f2eb, UID in object meta: 
I0911 19:19:11.230508  111245 pv_controller_base.go:212] volume "pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b" deleted
I0911 19:19:11.230592  111245 pv_controller_base.go:212] volume "pv-w-prebound" deleted
I0911 19:19:11.230641  111245 pv_controller_base.go:396] deletion of claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" was already processed
I0911 19:19:11.230696  111245 pv_controller_base.go:396] deletion of claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-pv-prebound" was already processed
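This cleanup block shows the reclaim-policy split: the dynamically provisioned volume (policy Delete) gets a deleteVolumeOperation, while the pre-created pv-w-prebound (policy Retain) only moves to phase Released and is kept. A toy Go sketch of that dispatch:

package main

import "fmt"

// reclaim mirrors the controller's dispatch in the log: a released
// volume's fate depends on its persistentVolumeReclaimPolicy.
func reclaim(volume, policy string) {
	fmt.Printf("volume %q is released and reclaim policy %q will be executed\n", volume, policy)
	switch policy {
	case "Delete":
		// The plugin deletes the backing storage, then the API object is removed.
		fmt.Printf("deleteVolumeOperation [%s] started\n", volume)
	case "Retain":
		// Nothing is deleted; the volume stays Released for manual recovery.
		fmt.Println("volume kept; only the phase moves to Released")
	case "Recycle":
		// Deprecated: the volume would be scrubbed and made Available again.
		fmt.Println("recycleVolumeOperation would run")
	}
}

func main() {
	reclaim("pvc-5421fdf0-fedf-470b-9ed9-3100b7e7967b", "Delete")
	reclaim("pv-w-prebound", "Retain")
}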
I0911 19:19:11.237258  111245 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.985738ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.237436  111245 volume_binding_test.go:751] Running test immediate provisioned by controller
I0911 19:19:11.238616  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (972.262µs) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.239903  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (914.959µs) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.241293  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.023213ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.243024  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (1.321252ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.243334  111245 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned", version 59103
I0911 19:19:11.243398  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:11.243423  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: no volume found
I0911 19:19:11.243434  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: started
I0911 19:19:11.243453  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned[23f7b9fe-1a7d-4207-a975-c36644413fa1]]
I0911 19:19:11.243505  111245 pv_controller.go:1372] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned] started, class: "immediate-nvnv"
I0911 19:19:11.245061  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-controller-provisioned: (1.305275ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.245193  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (1.601073ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.245307  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" with version 59104
I0911 19:19:11.245487  111245 scheduling_queue.go:830] About to try and schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-unbound
I0911 19:19:11.245519  111245 scheduler.go:530] Attempting to schedule pod: volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-unbound
I0911 19:19:11.245606  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" with version 59104
I0911 19:19:11.245638  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:11.245661  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: no volume found
I0911 19:19:11.245671  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: started
I0911 19:19:11.245688  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned[23f7b9fe-1a7d-4207-a975-c36644413fa1]]
E0911 19:19:11.245693  111245 factory.go:557] Error scheduling volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-unbound: pod has unbound immediate PersistentVolumeClaims; retrying
I0911 19:19:11.245696  111245 pv_controller.go:1642] operation "provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned[23f7b9fe-1a7d-4207-a975-c36644413fa1]" is already running, skipping
I0911 19:19:11.245731  111245 factory.go:615] Updating pod condition for volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-unbound to (PodScheduled==False, Reason=Unschedulable)
I0911 19:19:11.246624  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1: (1.027077ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.246902  111245 pv_controller.go:1476] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" created
I0911 19:19:11.246925  111245 pv_controller.go:1493] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: trying to save volume pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1
I0911 19:19:11.246947  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (789.544µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:11.247504  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound/status: (1.532817ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
E0911 19:19:11.247818  111245 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0911 19:19:11.248111  111245 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.860382ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46124]
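The "Error scheduling ... pod has unbound immediate PersistentVolumeClaims" lines show the scheduler refusing a pod whose Immediate-mode claim the PV controller has not bound yet; the pod's condition is set to PodScheduled=False/Unschedulable and it is retried. A toy Go sketch of that check (types and field names are illustrative, not the real scheduler structs):

package main

import (
	"errors"
	"fmt"
)

type pvc struct {
	name        string
	bindingMode string // "Immediate" or "WaitForFirstConsumer"
	boundTo     string // empty until the PV controller binds the claim
}

// checkPVCs mirrors the predicate behind the error above: an unbound
// Immediate-mode claim makes the pod unschedulable for now.
func checkPVCs(claims []pvc) error {
	for _, c := range claims {
		if c.bindingMode == "Immediate" && c.boundTo == "" {
			return errors.New("pod has unbound immediate PersistentVolumeClaims")
		}
	}
	return nil
}

func main() {
	err := checkPVCs([]pvc{{name: "pvc-controller-provisioned", bindingMode: "Immediate"}})
	fmt.Println(err) // the scheduler records Unschedulable and retries later
}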
I0911 19:19:11.248504  111245 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1", version 59108
I0911 19:19:11.248357  111245 httplog.go:90] POST /api/v1/persistentvolumes: (1.271451ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46106]
I0911 19:19:11.248539  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned (uid: 23f7b9fe-1a7d-4207-a975-c36644413fa1)", boundByController: true
I0911 19:19:11.248550  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned
I0911 19:19:11.248592  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:11.248608  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:11.248634  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" with version 59104
I0911 19:19:11.248644  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:11.248683  111245 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" found: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned (uid: 23f7b9fe-1a7d-4207-a975-c36644413fa1)", boundByController: true
I0911 19:19:11.248693  111245 pv_controller.go:931] binding volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned"
I0911 19:19:11.248702  111245 pv_controller.go:829] updating PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned"
I0911 19:19:11.248719  111245 pv_controller.go:841] updating PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned"
I0911 19:19:11.248721  111245 pv_controller.go:1501] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" saved
I0911 19:19:11.248729  111245 pv_controller.go:777] updating PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: set phase Bound
I0911 19:19:11.248740  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" with version 59108
I0911 19:19:11.248761  111245 pv_controller.go:1554] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" provisioned for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned"
I0911 19:19:11.248860  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-controller-provisioned", UID:"23f7b9fe-1a7d-4207-a975-c36644413fa1", APIVersion:"v1", ResourceVersion:"59104", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1 using kubernetes.io/mock-provisioner
I0911 19:19:11.250150  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.207786ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:11.250333  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1/status: (1.417789ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.250572  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" with version 59110
I0911 19:19:11.250649  111245 pv_controller.go:798] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" entered phase "Bound"
I0911 19:19:11.250694  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: binding to "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1"
I0911 19:19:11.250744  111245 pv_controller.go:901] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned"
I0911 19:19:11.250800  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" with version 59110
I0911 19:19:11.250830  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned (uid: 23f7b9fe-1a7d-4207-a975-c36644413fa1)", boundByController: true
I0911 19:19:11.250839  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned
I0911 19:19:11.250852  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:11.250861  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:11.252258  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-controller-provisioned: (1.21758ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.252499  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" with version 59111
I0911 19:19:11.252531  111245 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: bound to "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1"
I0911 19:19:11.252542  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned] status: set phase Bound
I0911 19:19:11.253955  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-controller-provisioned/status: (1.217902ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.254130  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" with version 59112
I0911 19:19:11.254156  111245 pv_controller.go:742] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" entered phase "Bound"
I0911 19:19:11.254168  111245 pv_controller.go:957] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned"
I0911 19:19:11.254190  111245 pv_controller.go:958] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned (uid: 23f7b9fe-1a7d-4207-a975-c36644413fa1)", boundByController: true
I0911 19:19:11.254215  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1", bindCompleted: true, boundByController: true
I0911 19:19:11.254245  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" with version 59112
I0911 19:19:11.254255  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: phase: Bound, bound to: "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1", bindCompleted: true, boundByController: true
I0911 19:19:11.254269  111245 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" found: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned (uid: 23f7b9fe-1a7d-4207-a975-c36644413fa1)", boundByController: true
I0911 19:19:11.254277  111245 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: claim is already correctly bound
I0911 19:19:11.254283  111245 pv_controller.go:931] binding volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned"
I0911 19:19:11.254290  111245 pv_controller.go:829] updating PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned"
I0911 19:19:11.254303  111245 pv_controller.go:841] updating PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned"
I0911 19:19:11.254309  111245 pv_controller.go:777] updating PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: set phase Bound
I0911 19:19:11.254315  111245 pv_controller.go:780] updating PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: phase Bound already set
I0911 19:19:11.254322  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: binding to "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1"
I0911 19:19:11.254335  111245 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned]: already bound to "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1"
I0911 19:19:11.254341  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned] status: set phase Bound
I0911 19:19:11.254354  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned] status: phase Bound already set
I0911 19:19:11.254383  111245 pv_controller.go:957] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned"
I0911 19:19:11.254405  111245 pv_controller.go:958] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned (uid: 23f7b9fe-1a7d-4207-a975-c36644413fa1)", boundByController: true
I0911 19:19:11.254417  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1", bindCompleted: true, boundByController: true
I0911 19:19:11.347833  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.802842ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.447792  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.707351ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.547696  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.777608ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.647595  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.665235ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.747825  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.791821ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.847703  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.748705ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:11.947764  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.783239ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:12.047629  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.617518ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:12.147616  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.588823ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:12.247577  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.652517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:12.347819  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.800966ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:12.447844  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.82948ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:12.547696  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.741155ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:12.647750  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.833651ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:12.747712  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.731377ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:12.847848  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.893502ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:12.947830  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.52306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
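The run of GET .../pods/pod-i-unbound requests at roughly 100 ms intervals above is the test harness polling until the pod schedules. A small stdlib-only Go sketch of that poll-until-deadline pattern (waitForPodScheduled and check are hypothetical names, not the test's actual helpers):

package main

import (
	"fmt"
	"time"
)

// waitForPodScheduled models the ~100 ms GET loop: evaluate a condition on
// each tick until it holds or the deadline passes. check stands in for one
// GET of the pod.
func waitForPodScheduled(check func() bool, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() { // one GET per tick, like the httplog entries above
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %v", timeout)
}

func main() {
	n := 0
	err := waitForPodScheduled(func() bool { n++; return n >= 5 }, 100*time.Millisecond, 30*time.Second)
	fmt.Println("done, err =", err)
}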
I0911 19:19:13.032792  111245 scheduling_queue.go:830] About to try and schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-unbound
I0911 19:19:13.032828  111245 scheduler.go:530] Attempting to schedule pod: volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-unbound
I0911 19:19:13.033045  111245 scheduler_binder.go:652] All bound volumes for Pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-unbound" match with Node "node-1"
I0911 19:19:13.033143  111245 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-unbound", node "node-1"
I0911 19:19:13.033168  111245 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-unbound", node "node-1": all PVCs bound and nothing to do
I0911 19:19:13.033221  111245 factory.go:606] Attempting to bind pod-i-unbound to node-1
I0911 19:19:13.036607  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound/binding: (2.979769ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.036908  111245 scheduler.go:667] pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-i-unbound is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 19:19:13.039492  111245 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.975544ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.047504  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-i-unbound: (1.407789ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.049954  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-controller-provisioned: (1.817735ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.054310  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (3.947559ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.059321  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (4.64745ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.059575  111245 pv_controller_base.go:258] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" deleted
I0911 19:19:13.059625  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" with version 59110
I0911 19:19:13.059660  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned (uid: 23f7b9fe-1a7d-4207-a975-c36644413fa1)", boundByController: true
I0911 19:19:13.059752  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned
I0911 19:19:13.060731  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-controller-provisioned: (753.142µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.060969  111245 pv_controller.go:547] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned not found
I0911 19:19:13.060999  111245 pv_controller.go:575] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" is released and reclaim policy "Delete" will be executed
I0911 19:19:13.061012  111245 pv_controller.go:777] updating PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: set phase Released
I0911 19:19:13.062819  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1/status: (1.564856ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.063331  111245 store.go:228] deletion of /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1 failed because of a conflict, going to retry
I0911 19:19:13.063530  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" with version 59126
I0911 19:19:13.063574  111245 pv_controller.go:798] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" entered phase "Released"
I0911 19:19:13.063586  111245 pv_controller.go:1022] reclaimVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: policy is Delete
I0911 19:19:13.063608  111245 pv_controller.go:1631] scheduleOperation[delete-pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1[858d2feb-d8b0-4b4f-8bfc-f5179427c507]]
I0911 19:19:13.063636  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" with version 59126
I0911 19:19:13.063661  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: phase: Released, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned (uid: 23f7b9fe-1a7d-4207-a975-c36644413fa1)", boundByController: true
I0911 19:19:13.063682  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned
I0911 19:19:13.063705  111245 pv_controller.go:547] synchronizing PersistentVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned not found
I0911 19:19:13.063712  111245 pv_controller.go:1022] reclaimVolume[pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1]: policy is Delete
I0911 19:19:13.063720  111245 pv_controller.go:1631] scheduleOperation[delete-pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1[858d2feb-d8b0-4b4f-8bfc-f5179427c507]]
I0911 19:19:13.063726  111245 pv_controller.go:1642] operation "delete-pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1[858d2feb-d8b0-4b4f-8bfc-f5179427c507]" is already running, skipping
I0911 19:19:13.063759  111245 pv_controller.go:1146] deleteVolumeOperation [pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1] started
I0911 19:19:13.064762  111245 httplog.go:90] DELETE /api/v1/persistentvolumes: (5.08637ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.064773  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1: (773.291µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.065031  111245 pv_controller_base.go:212] volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" deleted
I0911 19:19:13.065078  111245 pv_controller_base.go:396] deletion of claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-controller-provisioned" was already processed
I0911 19:19:13.065145  111245 pv_controller.go:1153] error reading persistent volume "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1": persistentvolumes "pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1" not found
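The "error reading persistent volume ... not found" line above is benign: the test's collection DELETE removed the volume before the controller's deleteVolumeOperation could re-read it, so the operation simply ends, and the claim's deletion "was already processed". A toy Go sketch of that tolerant re-read:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// deleteVolumeOperation re-reads the volume before acting; if another actor
// (here, the test cleanup) already removed it, there is nothing left to do.
func deleteVolumeOperation(get func(string) error, name string) {
	if err := get(name); err != nil {
		fmt.Printf("error reading persistent volume %q: %v\n", name, err)
		return // deletion already happened elsewhere; give up quietly
	}
	// ...otherwise the plugin's delete and the API DELETE would run here.
}

func main() {
	deleteVolumeOperation(func(string) error { return errNotFound },
		"pvc-23f7b9fe-1a7d-4207-a975-c36644413fa1")
}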
I0911 19:19:13.071511  111245 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.332279ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.071697  111245 volume_binding_test.go:751] Running test wait provisioned
I0911 19:19:13.072938  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.012175ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.074463  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.202805ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.075810  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (999.401µs) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.077429  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (1.261058ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.077579  111245 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision", version 59134
I0911 19:19:13.077597  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:13.077617  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:13.077643  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Pending
I0911 19:19:13.077655  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: phase Pending already set
I0911 19:19:13.077670  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-canprovision", UID:"719ddfa0-4561-4a45-8125-c9598adfe364", APIVersion:"v1", ResourceVersion:"59134", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 19:19:13.079088  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.206009ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
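
The WaitForFirstConsumer event above is what a claim receives when its StorageClass has volumeBindingMode: WaitForFirstConsumer, so binding and provisioning are deferred until a pod that uses the claim is scheduled. A sketch of constructing such a class with the k8s.io/api/storage/v1 types (the class name here is a stand-in, not the test's generated "wait-x47h"; the provisioner matches the log):

    package main

    import (
        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newWaitClass builds a StorageClass whose claims stay Pending with
    // "WaitForFirstConsumer" events until a consuming pod is scheduled.
    func newWaitClass() *storagev1.StorageClass {
        mode := storagev1.VolumeBindingWaitForFirstConsumer
        return &storagev1.StorageClass{
            ObjectMeta:        metav1.ObjectMeta{Name: "wait-example"}, // stand-in name
            Provisioner:       "kubernetes.io/mock-provisioner",        // as in the log
            VolumeBindingMode: &mode,
        }
    }

    func main() { _ = newWaitClass() }
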
I0911 19:19:13.079920  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (2.118432ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.080265  111245 scheduling_queue.go:830] About to try and schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canprovision
I0911 19:19:13.080287  111245 scheduler.go:530] Attempting to schedule pod: volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canprovision
I0911 19:19:13.080440  111245 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canprovision", PVC "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" on node "node-1"
I0911 19:19:13.080458  111245 scheduler_binder.go:734] Provisioning for claims of pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canprovision" that has no matching volumes on node "node-1" ...
I0911 19:19:13.080515  111245 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canprovision", node "node-1"
I0911 19:19:13.080541  111245 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision", version 59134
I0911 19:19:13.080581  111245 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canprovision", node "node-1"
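
AssumePodVolumes followed by BindPodVolumes is the volume binder's two-phase protocol: first the intended PVC/PV bindings are applied optimistically to an in-memory assume cache so scheduling can continue, then the real API writes are issued and polled until the PV controller reports every claim Bound (the repeated GETs on pod-pvc-canprovision further below are that wait loop). A simplified sketch of the shape of the protocol; Binder here is a hypothetical stand-in, not the real interface in scheduler_binder.go:

    package sketch

    // Binder is a hypothetical stand-in for the scheduler's volume binder.
    type Binder interface {
        // AssumePodVolumes records intended PVC/PV bindings in an in-memory
        // cache so the scheduler can keep working optimistically.
        AssumePodVolumes(pod, node string) (allBound bool, err error)
        // BindPodVolumes issues the real API updates and blocks until the
        // PV controller reports every claim Bound (or a timeout fires).
        BindPodVolumes(pod string) error
    }

    func schedule(b Binder, pod, node string) error {
        if allBound, err := b.AssumePodVolumes(pod, node); err != nil || allBound {
            return err // assume failed, or nothing left to bind
        }
        return b.BindPodVolumes(pod) // triggers provisioning and binding
    }
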
I0911 19:19:13.082112  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.308746ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.082246  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59137
I0911 19:19:13.082271  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:13.082292  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:13.082301  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: started
I0911 19:19:13.082316  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[719ddfa0-4561-4a45-8125-c9598adfe364]]
I0911 19:19:13.082397  111245 pv_controller.go:1372] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] started, class: "wait-x47h"
I0911 19:19:13.083909  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.277531ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.084127  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59138
I0911 19:19:13.084157  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59138
I0911 19:19:13.084176  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:13.084190  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:13.084196  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: started
I0911 19:19:13.084208  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[719ddfa0-4561-4a45-8125-c9598adfe364]]
I0911 19:19:13.084214  111245 pv_controller.go:1642] operation "provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[719ddfa0-4561-4a45-8125-c9598adfe364]" is already running, skipping
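
The "is already running, skipping" lines show the controller deduplicating work: scheduleOperation keys each provision/delete operation by name plus claim UID and refuses to start a second goroutine while one with the same key is in flight (the real implementation is the goroutinemap utility in k8s.io/kubernetes/pkg/util/goroutinemap). A minimal, self-contained sketch of the pattern:

    package sketch

    import "sync"

    // runner starts at most one goroutine per operation name at a time,
    // mirroring scheduleOperation and the "already running, skipping" lines.
    type runner struct {
        mu      sync.Mutex
        running map[string]bool
    }

    func newRunner() *runner { return &runner{running: map[string]bool{}} }

    // run executes op in a goroutine unless an operation with the same name
    // is already in flight, in which case it skips, as the log does.
    func (r *runner) run(name string, op func()) (started bool) {
        r.mu.Lock()
        if r.running[name] {
            r.mu.Unlock()
            return false // operation "name" is already running, skipping
        }
        r.running[name] = true
        r.mu.Unlock()
        go func() {
            defer func() {
                r.mu.Lock()
                delete(r.running, name)
                r.mu.Unlock()
            }()
            op()
        }()
        return true
    }
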
I0911 19:19:13.085163  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-719ddfa0-4561-4a45-8125-c9598adfe364: (742.875µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.085491  111245 pv_controller.go:1476] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" created
I0911 19:19:13.085516  111245 pv_controller.go:1493] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: trying to save volume pvc-719ddfa0-4561-4a45-8125-c9598adfe364
I0911 19:19:13.086850  111245 httplog.go:90] POST /api/v1/persistentvolumes: (1.081793ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.087015  111245 pv_controller.go:1501] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" saved
I0911 19:19:13.087041  111245 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364", version 59139
I0911 19:19:13.087059  111245 pv_controller.go:1554] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" provisioned for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:13.087194  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-canprovision", UID:"719ddfa0-4561-4a45-8125-c9598adfe364", APIVersion:"v1", ResourceVersion:"59138", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-719ddfa0-4561-4a45-8125-c9598adfe364 using kubernetes.io/mock-provisioner
I0911 19:19:13.087306  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" with version 59139
I0911 19:19:13.087350  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 719ddfa0-4561-4a45-8125-c9598adfe364)", boundByController: true
I0911 19:19:13.087428  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:13.087474  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:13.087527  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:13.087607  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59138
I0911 19:19:13.087631  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:13.087723  111245 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" found: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 719ddfa0-4561-4a45-8125-c9598adfe364)", boundByController: true
I0911 19:19:13.087746  111245 pv_controller.go:931] binding volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:13.087759  111245 pv_controller.go:829] updating PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:13.087839  111245 pv_controller.go:841] updating PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:13.087860  111245 pv_controller.go:777] updating PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: set phase Bound
I0911 19:19:13.088424  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.176012ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:13.089581  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-719ddfa0-4561-4a45-8125-c9598adfe364/status: (1.385736ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.089773  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" with version 59141
I0911 19:19:13.089799  111245 pv_controller.go:798] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" entered phase "Bound"
I0911 19:19:13.089813  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" with version 59141
I0911 19:19:13.089814  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: binding to "pvc-719ddfa0-4561-4a45-8125-c9598adfe364"
I0911 19:19:13.089835  111245 pv_controller.go:901] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:13.089838  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 719ddfa0-4561-4a45-8125-c9598adfe364)", boundByController: true
I0911 19:19:13.089852  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:13.089866  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:13.089881  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:13.091313  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.291128ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.091514  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59142
I0911 19:19:13.091538  111245 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: bound to "pvc-719ddfa0-4561-4a45-8125-c9598adfe364"
I0911 19:19:13.091546  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Bound
I0911 19:19:13.093268  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision/status: (1.565909ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.093527  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59143
I0911 19:19:13.093554  111245 pv_controller.go:742] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" entered phase "Bound"
I0911 19:19:13.093565  111245 pv_controller.go:957] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:13.093581  111245 pv_controller.go:958] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 719ddfa0-4561-4a45-8125-c9598adfe364)", boundByController: true
I0911 19:19:13.093592  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-719ddfa0-4561-4a45-8125-c9598adfe364", bindCompleted: true, boundByController: true
I0911 19:19:13.093621  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59143
I0911 19:19:13.093635  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Bound, bound to: "pvc-719ddfa0-4561-4a45-8125-c9598adfe364", bindCompleted: true, boundByController: true
I0911 19:19:13.093653  111245 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" found: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 719ddfa0-4561-4a45-8125-c9598adfe364)", boundByController: true
I0911 19:19:13.093660  111245 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: claim is already correctly bound
I0911 19:19:13.093667  111245 pv_controller.go:931] binding volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:13.093674  111245 pv_controller.go:829] updating PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:13.093687  111245 pv_controller.go:841] updating PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:13.093694  111245 pv_controller.go:777] updating PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: set phase Bound
I0911 19:19:13.093699  111245 pv_controller.go:780] updating PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: phase Bound already set
I0911 19:19:13.093705  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: binding to "pvc-719ddfa0-4561-4a45-8125-c9598adfe364"
I0911 19:19:13.093718  111245 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: already bound to "pvc-719ddfa0-4561-4a45-8125-c9598adfe364"
I0911 19:19:13.093725  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Bound
I0911 19:19:13.093738  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: phase Bound already set
I0911 19:19:13.093745  111245 pv_controller.go:957] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:13.093757  111245 pv_controller.go:958] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 719ddfa0-4561-4a45-8125-c9598adfe364)", boundByController: true
I0911 19:19:13.093777  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-719ddfa0-4561-4a45-8125-c9598adfe364", bindCompleted: true, boundByController: true
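
By this point the bind is complete in both directions: the PV's spec.claimRef points at the claim, the claim's spec.volumeName points back at the PV, and bindCompleted is recorded on the claim, which is exactly what the "status after binding" lines report. A small sketch of checking that invariant, assuming only the core/v1 types:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // bound reports whether pv and pvc reference each other the way the
    // "status after binding" lines describe: claimRef on the PV side,
    // volumeName on the claim side, and both phases Bound.
    func bound(pv *corev1.PersistentVolume, pvc *corev1.PersistentVolumeClaim) bool {
        ref := pv.Spec.ClaimRef
        return ref != nil &&
            ref.UID == pvc.UID &&
            pvc.Spec.VolumeName == pv.Name &&
            pv.Status.Phase == corev1.VolumeBound &&
            pvc.Status.Phase == corev1.ClaimBound
    }
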
I0911 19:19:13.182183  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.497618ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.282299  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.604888ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.382596  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.877015ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.482338  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.610415ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.582239  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.4563ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.682053  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.324453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.782347  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.669678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.882164  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.52321ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:13.982192  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.502895ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.033169  111245 cache.go:669] Couldn't expire cache for pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canprovision. Binding is still in progress.
I0911 19:19:14.082459  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.745715ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.082469  111245 scheduler_binder.go:546] All PVCs for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canprovision" are bound
I0911 19:19:14.082732  111245 factory.go:606] Attempting to bind pod-pvc-canprovision to node-1
I0911 19:19:14.085279  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision/binding: (2.255615ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:14.085646  111245 scheduler.go:667] pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canprovision is bound successfully on node "node-1", 1 node evaluated, 1 node found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 19:19:14.087619  111245 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.567696ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
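
Once all PVCs are bound, the scheduler places the pod itself by POSTing to the pods/binding subresource, which is the 201 logged just above. A hedged client-go sketch of that call using current client-go signatures (older releases took no context argument); the namespace and node names are the caller's:

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // bindPod mirrors the scheduler's final step: POST the Binding
    // subresource to place the pod on the chosen node.
    func bindPod(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
        binding := &corev1.Binding{
            ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: ns},
            Target:     corev1.ObjectReference{Kind: "Node", Name: node},
        }
        return cs.CoreV1().Pods(ns).Bind(ctx, binding, metav1.CreateOptions{})
    }
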
I0911 19:19:14.182300  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canprovision: (1.606943ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:14.184453  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.55374ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:14.189678  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (4.730594ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:14.193681  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (3.341439ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:14.193995  111245 pv_controller_base.go:258] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" deleted
I0911 19:19:14.194030  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" with version 59141
I0911 19:19:14.194055  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 719ddfa0-4561-4a45-8125-c9598adfe364)", boundByController: true
I0911 19:19:14.194063  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:14.195139  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (916.841µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.195464  111245 pv_controller.go:547] synchronizing PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision not found
I0911 19:19:14.195493  111245 pv_controller.go:575] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" is released and reclaim policy "Delete" will be executed
I0911 19:19:14.195505  111245 pv_controller.go:777] updating PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: set phase Released
I0911 19:19:14.197067  111245 store.go:362] GuaranteedUpdate of /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pvc-719ddfa0-4561-4a45-8125-c9598adfe364 failed because of a conflict, going to retry
I0911 19:19:14.197274  111245 httplog.go:90] DELETE /api/v1/persistentvolumes: (3.031418ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:14.197398  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-719ddfa0-4561-4a45-8125-c9598adfe364/status: (1.65703ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.197590  111245 pv_controller.go:790] updating PersistentVolume[pvc-719ddfa0-4561-4a45-8125-c9598adfe364]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pvc-719ddfa0-4561-4a45-8125-c9598adfe364": StorageError: invalid object, Code: 4, Key: /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pvc-719ddfa0-4561-4a45-8125-c9598adfe364, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 027b5d05-694f-4f90-8a08-cd1ae0582e94, UID in object meta: 
I0911 19:19:14.197660  111245 pv_controller_base.go:202] could not sync volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364": Operation cannot be fulfilled on persistentvolumes "pvc-719ddfa0-4561-4a45-8125-c9598adfe364": StorageError: invalid object, Code: 4, Key: /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pvc-719ddfa0-4561-4a45-8125-c9598adfe364, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 027b5d05-694f-4f90-8a08-cd1ae0582e94, UID in object meta: 
I0911 19:19:14.197713  111245 pv_controller_base.go:212] volume "pvc-719ddfa0-4561-4a45-8125-c9598adfe364" deleted
I0911 19:19:14.197797  111245 pv_controller_base.go:396] deletion of claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" was already processed
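
The 409 above is an expected race: the controller tried to PUT the volume's status while the test's DELETE was landing, so the GuaranteedUpdate failed its UID precondition and the sync was requeued; by the next pass the volume was gone and its deletion "was already processed". The standard client-go idiom for riding out such write conflicts is retry.RetryOnConflict; a sketch, where updatePVStatus is a hypothetical helper that re-GETs the latest object, mutates it, and PUTs it back:

    package sketch

    import "k8s.io/client-go/util/retry"

    // setPhase retries a conflicting status update with the default backoff,
    // the usual answer to a 409 "Operation cannot be fulfilled" error.
    // updatePVStatus must fetch a fresh copy on each attempt so the retry
    // writes against the latest resourceVersion.
    func setPhase(updatePVStatus func() error) error {
        return retry.RetryOnConflict(retry.DefaultRetry, updatePVStatus)
    }
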
I0911 19:19:14.204020  111245 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.285738ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.204225  111245 volume_binding_test.go:751] Running test topology unsatisfied
I0911 19:19:14.205710  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.230628ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.207119  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.06996ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.208593  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.15219ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.210285  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (1.254996ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.210433  111245 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-topomismatch", version 59159
I0911 19:19:14.210458  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-topomismatch]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.210474  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-topomismatch]: no volume found
I0911 19:19:14.210498  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-topomismatch] status: set phase Pending
I0911 19:19:14.210510  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-topomismatch] status: phase Pending already set
I0911 19:19:14.210565  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-topomismatch", UID:"524cd1bf-e55e-46c2-aeaf-899ca6e3fb53", APIVersion:"v1", ResourceVersion:"59159", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 19:19:14.212241  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (1.256026ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:14.212578  111245 scheduling_queue.go:830] About to try and schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-topomismatch
I0911 19:19:14.212601  111245 scheduler.go:530] Attempting to schedule pod: volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-topomismatch
I0911 19:19:14.212712  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (2.006442ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.212858  111245 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-topomismatch", PVC "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-topomismatch" on node "node-1"
I0911 19:19:14.212948  111245 scheduler_binder.go:724] Node "node-1" cannot satisfy provisioning topology requirements of claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-topomismatch"
I0911 19:19:14.213034  111245 factory.go:541] Unable to schedule volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-topomismatch: no fit: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.; waiting
I0911 19:19:14.213113  111245 factory.go:615] Updating pod condition for volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-topomismatch to (PodScheduled==False, Reason=Unschedulable)
I0911 19:19:14.214716  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-topomismatch/status: (1.322048ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.214716  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-topomismatch: (930.808µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:14.215198  111245 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.235816ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.216156  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-topomismatch: (914.191µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46122]
I0911 19:19:14.216504  111245 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-topomismatch on any node.
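
The "cannot satisfy provisioning topology requirements" failure is the binder comparing node-1's labels against the StorageClass's allowedTopologies and finding no overlap, which leaves the pod unschedulable; as generic_scheduler notes, preemption cannot help, since evicting pods does not change a node's topology. A sketch of such a class, extending the WaitForFirstConsumer example earlier; the label key and zone value are illustrative, not the test's:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // topoClass provisions only in the listed zones; a node whose labels
    // match none of them yields the "didn't find available persistent
    // volumes to bind" scheduling failure above.
    func topoClass() *storagev1.StorageClass {
        mode := storagev1.VolumeBindingWaitForFirstConsumer
        return &storagev1.StorageClass{
            ObjectMeta:        metav1.ObjectMeta{Name: "topo-example"}, // stand-in
            Provisioner:       "kubernetes.io/mock-provisioner",
            VolumeBindingMode: &mode,
            AllowedTopologies: []corev1.TopologySelectorTerm{{
                MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
                    Key:    "topology.kubernetes.io/zone", // illustrative key
                    Values: []string{"zone-the-node-lacks"},
                }},
            }},
        }
    }
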
I0911 19:19:14.314772  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-topomismatch: (1.864843ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.316452  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-topomismatch: (1.243944ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.320039  111245 scheduling_queue.go:830] About to try and schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-topomismatch
I0911 19:19:14.320076  111245 scheduler.go:526] Skip schedule deleting pod: volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-topomismatch
I0911 19:19:14.321000  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (4.163019ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.321884  111245 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.202332ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.324238  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (2.94664ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.324590  111245 pv_controller_base.go:258] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-topomismatch" deleted
I0911 19:19:14.325955  111245 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.136368ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.332960  111245 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.612869ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
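
Each test case ends with the same sweep seen above: delete all pods and claims in the namespace, then the cluster-scoped volumes and storage classes, before the next case starts. With client-go that is a handful of DeleteCollection calls; a sketch using current signatures:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // cleanup mirrors the DELETE-collection requests in the log: wipe the
    // namespace's pods and claims, then the cluster-scoped PVs and classes.
    func cleanup(ctx context.Context, cs kubernetes.Interface, ns string) error {
        del, list := metav1.DeleteOptions{}, metav1.ListOptions{}
        if err := cs.CoreV1().Pods(ns).DeleteCollection(ctx, del, list); err != nil {
            return err
        }
        if err := cs.CoreV1().PersistentVolumeClaims(ns).DeleteCollection(ctx, del, list); err != nil {
            return err
        }
        if err := cs.CoreV1().PersistentVolumes().DeleteCollection(ctx, del, list); err != nil {
            return err
        }
        return cs.StorageV1().StorageClasses().DeleteCollection(ctx, del, list)
    }
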
I0911 19:19:14.333150  111245 volume_binding_test.go:751] Running test wait one bound, one provisioned
I0911 19:19:14.334694  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.358611ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.336153  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.154809ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.338687  111245 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.157092ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.340300  111245 httplog.go:90] POST /api/v1/persistentvolumes: (1.245761ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.340658  111245 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-canbind", version 59174
I0911 19:19:14.340691  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I0911 19:19:14.340706  111245 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0911 19:19:14.340712  111245 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0911 19:19:14.341994  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (1.050812ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.342262  111245 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind", version 59175
I0911 19:19:14.342354  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.342498  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: no volume found
I0911 19:19:14.342553  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind] status: set phase Pending
I0911 19:19:14.342604  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind] status: phase Pending already set
I0911 19:19:14.342870  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-w-canbind", UID:"b2e0689b-9c24-424d-bcd9-c71a24f2ab8d", APIVersion:"v1", ResourceVersion:"59175", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 19:19:14.344020  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (1.658536ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.344289  111245 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision", version 59176
I0911 19:19:14.344311  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.344328  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:14.344349  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Pending
I0911 19:19:14.344438  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: phase Pending already set
I0911 19:19:14.344491  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-canprovision", UID:"8679464e-c14a-455c-b46c-5a6fcbeb9ea6", APIVersion:"v1", ResourceVersion:"59176", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 19:19:14.344697  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.610825ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46154]
I0911 19:19:14.346593  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.512538ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46154]
I0911 19:19:14.346932  111245 scheduling_queue.go:830] About to try and schedule pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canbind-or-provision
I0911 19:19:14.347061  111245 scheduler.go:530] Attempting to schedule pod: volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canbind-or-provision
I0911 19:19:14.347267  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (6.308864ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.347293  111245 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canbind-or-provision", PVC "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" on node "node-1"
I0911 19:19:14.347550  111245 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canbind-or-provision", PVC "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" on node "node-1"
I0911 19:19:14.347610  111245 scheduler_binder.go:734] Provisioning for claims of pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canbind-or-provision" that has no matching volumes on node "node-1" ...
I0911 19:19:14.347695  111245 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canbind-or-provision", node "node-1"
I0911 19:19:14.347744  111245 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind", version 59175
I0911 19:19:14.347779  111245 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision", version 59176
I0911 19:19:14.347855  111245 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canbind-or-provision", node "node-1"
I0911 19:19:14.347754  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 59180
I0911 19:19:14.347988  111245 pv_controller.go:798] volume "pv-w-canbind" entered phase "Available"
I0911 19:19:14.348010  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 59180
I0911 19:19:14.348022  111245 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I0911 19:19:14.348036  111245 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0911 19:19:14.348100  111245 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0911 19:19:14.348139  111245 pv_controller.go:780] updating PersistentVolume[pv-w-canbind]: phase Available already set
I0911 19:19:14.348150  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (3.404611ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.349672  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-w-canbind: (1.497648ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46152]
I0911 19:19:14.349897  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" with version 59181
I0911 19:19:14.349925  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.349943  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: no volume found
I0911 19:19:14.349949  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: started
I0911 19:19:14.349964  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind[b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]]
I0911 19:19:14.350001  111245 pv_controller.go:1372] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind] started, class: "wait-vpgg"
I0911 19:19:14.351757  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.561459ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.351833  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-w-canbind: (1.631226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46154]
I0911 19:19:14.352279  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59182
I0911 19:19:14.352352  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" with version 59183
I0911 19:19:14.352406  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.352691  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:14.352752  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: started
I0911 19:19:14.352788  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[8679464e-c14a-455c-b46c-5a6fcbeb9ea6]]
I0911 19:19:14.352830  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" with version 59183
I0911 19:19:14.352879  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.352943  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: no volume found
I0911 19:19:14.352983  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: started
I0911 19:19:14.353042  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind[b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]]
I0911 19:19:14.353080  111245 pv_controller.go:1642] operation "provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind[b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]" is already running, skipping
I0911 19:19:14.352882  111245 pv_controller.go:1372] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] started, class: "wait-vpgg"
I0911 19:19:14.353672  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d: (1.006139ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46154]
I0911 19:19:14.353864  111245 pv_controller.go:1476] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" created
I0911 19:19:14.353887  111245 pv_controller.go:1493] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: trying to save volume pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d
I0911 19:19:14.355267  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.846221ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.355556  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59184
I0911 19:19:14.355688  111245 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d", version 59185
I0911 19:19:14.355733  111245 httplog.go:90] POST /api/v1/persistentvolumes: (1.651698ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46154]
I0911 19:19:14.355735  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind (uid: b2e0689b-9c24-424d-bcd9-c71a24f2ab8d)", boundByController: true
I0911 19:19:14.355754  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind
I0911 19:19:14.355769  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.355784  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:14.355694  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59184
I0911 19:19:14.355881  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.355994  111245 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: no volume found
I0911 19:19:14.356056  111245 pv_controller.go:1326] provisionClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: started
I0911 19:19:14.356116  111245 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[8679464e-c14a-455c-b46c-5a6fcbeb9ea6]]
I0911 19:19:14.356154  111245 pv_controller.go:1642] operation "provision-volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision[8679464e-c14a-455c-b46c-5a6fcbeb9ea6]" is already running, skipping
I0911 19:19:14.356213  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" with version 59183
I0911 19:19:14.356260  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.356337  111245 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" found: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind (uid: b2e0689b-9c24-424d-bcd9-c71a24f2ab8d)", boundByController: true
I0911 19:19:14.356427  111245 pv_controller.go:931] binding volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind"
I0911 19:19:14.356479  111245 pv_controller.go:829] updating PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind"
I0911 19:19:14.356527  111245 pv_controller.go:841] updating PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind"
I0911 19:19:14.356578  111245 pv_controller.go:777] updating PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: set phase Bound
I0911 19:19:14.356620  111245 pv_controller.go:1501] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" saved
I0911 19:19:14.356699  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" with version 59185
I0911 19:19:14.356925  111245 pv_controller.go:1554] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" provisioned for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind"
I0911 19:19:14.357120  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-w-canbind", UID:"b2e0689b-9c24-424d-bcd9-c71a24f2ab8d", APIVersion:"v1", ResourceVersion:"59183", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d using kubernetes.io/mock-provisioner
I0911 19:19:14.356881  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6: (1.060257ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.357398  111245 pv_controller.go:1476] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" created
I0911 19:19:14.357423  111245 pv_controller.go:1493] provisionClaimOperation [volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: trying to save volume pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6
I0911 19:19:14.358934  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.433031ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.359112  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" with version 59187
I0911 19:19:14.359153  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind (uid: b2e0689b-9c24-424d-bcd9-c71a24f2ab8d)", boundByController: true
I0911 19:19:14.359165  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind
I0911 19:19:14.359181  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.359196  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:14.359338  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d/status: (1.649911ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46154]
I0911 19:19:14.359411  111245 httplog.go:90] POST /api/v1/persistentvolumes: (1.72432ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46156]
I0911 19:19:14.359443  111245 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6", version 59188
I0911 19:19:14.359503  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 8679464e-c14a-455c-b46c-5a6fcbeb9ea6)", boundByController: true
I0911 19:19:14.359524  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:14.359539  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.359561  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:14.359637  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" with version 59187
I0911 19:19:14.359671  111245 pv_controller.go:798] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" entered phase "Bound"
I0911 19:19:14.359681  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: binding to "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d"
I0911 19:19:14.359691  111245 pv_controller.go:1501] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" saved
I0911 19:19:14.359696  111245 pv_controller.go:901] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind"
I0911 19:19:14.359714  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" with version 59188
I0911 19:19:14.359738  111245 pv_controller.go:1554] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" provisioned for claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:14.359877  111245 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a", Name:"pvc-canprovision", UID:"8679464e-c14a-455c-b46c-5a6fcbeb9ea6", APIVersion:"v1", ResourceVersion:"59184", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6 using kubernetes.io/mock-provisioner
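The 'ProvisioningSucceeded' event above closes the dynamic-provisioning path: provisionClaimOperation saved a pvc-<uid> volume and the controller will now bind it to the claim. A minimal client-go sketch of a claim that exercises this path, assuming a namespace, a client, and a hypothetical "mock-sc" StorageClass (none of these names come from this run), using the pre-context Create signature of this repo's vintage:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createProvisionableClaim(client kubernetes.Interface, ns string) (*v1.PersistentVolumeClaim, error) {
        sc := "mock-sc" // hypothetical StorageClass backed by a dynamic provisioner
        pvc := &v1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{Name: "pvc-canprovision", Namespace: ns},
            Spec: v1.PersistentVolumeClaimSpec{
                AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
                StorageClassName: &sc,
                Resources: v1.ResourceRequirements{
                    Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
                },
            },
        }
        // The PV controller sees the new claim, runs provisionClaimOperation,
        // saves a pvc-<uid> volume, and binds the two (the sequence logged above).
        return client.CoreV1().PersistentVolumeClaims(ns).Create(pvc)
    }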
I0911 19:19:14.361264  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.238508ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46158]
I0911 19:19:14.361602  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-w-canbind: (1.61086ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.361836  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" with version 59190
I0911 19:19:14.361907  111245 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: bound to "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d"
I0911 19:19:14.361941  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind] status: set phase Bound
I0911 19:19:14.363476  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-w-canbind/status: (1.324659ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.363735  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" with version 59191
I0911 19:19:14.363838  111245 pv_controller.go:742] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" entered phase "Bound"
I0911 19:19:14.363899  111245 pv_controller.go:957] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind"
I0911 19:19:14.363950  111245 pv_controller.go:958] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind (uid: b2e0689b-9c24-424d-bcd9-c71a24f2ab8d)", boundByController: true
I0911 19:19:14.363999  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d", bindCompleted: true, boundByController: true
I0911 19:19:14.364077  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59184
I0911 19:19:14.364157  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.364223  111245 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" found: phase: Pending, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 8679464e-c14a-455c-b46c-5a6fcbeb9ea6)", boundByController: true
I0911 19:19:14.364278  111245 pv_controller.go:931] binding volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:14.364323  111245 pv_controller.go:829] updating PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:14.364389  111245 pv_controller.go:841] updating PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:14.364433  111245 pv_controller.go:777] updating PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: set phase Bound
I0911 19:19:14.366212  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" with version 59192
I0911 19:19:14.366250  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 8679464e-c14a-455c-b46c-5a6fcbeb9ea6)", boundByController: true
I0911 19:19:14.366263  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:14.366280  111245 pv_controller.go:555] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 19:19:14.366294  111245 pv_controller.go:603] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: volume not bound yet, waiting for syncClaim to fix it
I0911 19:19:14.366217  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6/status: (1.476854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.366491  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" with version 59192
I0911 19:19:14.366516  111245 pv_controller.go:798] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" entered phase "Bound"
I0911 19:19:14.366527  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: binding to "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6"
I0911 19:19:14.366540  111245 pv_controller.go:901] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:14.368111  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (1.390515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.368320  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59193
I0911 19:19:14.368348  111245 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: bound to "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6"
I0911 19:19:14.368390  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Bound
I0911 19:19:14.370038  111245 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision/status: (1.436025ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.370287  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59194
I0911 19:19:14.370375  111245 pv_controller.go:742] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" entered phase "Bound"
I0911 19:19:14.370426  111245 pv_controller.go:957] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:14.370493  111245 pv_controller.go:958] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 8679464e-c14a-455c-b46c-5a6fcbeb9ea6)", boundByController: true
I0911 19:19:14.370546  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6", bindCompleted: true, boundByController: true
I0911 19:19:14.370617  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" with version 59191
I0911 19:19:14.370675  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: phase: Bound, bound to: "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d", bindCompleted: true, boundByController: true
I0911 19:19:14.370723  111245 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" found: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind (uid: b2e0689b-9c24-424d-bcd9-c71a24f2ab8d)", boundByController: true
I0911 19:19:14.370772  111245 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: claim is already correctly bound
I0911 19:19:14.370810  111245 pv_controller.go:931] binding volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind"
I0911 19:19:14.370857  111245 pv_controller.go:829] updating PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind"
I0911 19:19:14.370900  111245 pv_controller.go:841] updating PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind"
I0911 19:19:14.370946  111245 pv_controller.go:777] updating PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: set phase Bound
I0911 19:19:14.370982  111245 pv_controller.go:780] updating PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: phase Bound already set
I0911 19:19:14.371030  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: binding to "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d"
I0911 19:19:14.371081  111245 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind]: already bound to "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d"
I0911 19:19:14.371123  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind] status: set phase Bound
I0911 19:19:14.371169  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind] status: phase Bound already set
I0911 19:19:14.371220  111245 pv_controller.go:957] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind"
I0911 19:19:14.371270  111245 pv_controller.go:958] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind (uid: b2e0689b-9c24-424d-bcd9-c71a24f2ab8d)", boundByController: true
I0911 19:19:14.371318  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d", bindCompleted: true, boundByController: true
I0911 19:19:14.371385  111245 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" with version 59194
I0911 19:19:14.371438  111245 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: phase: Bound, bound to: "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6", bindCompleted: true, boundByController: true
I0911 19:19:14.371490  111245 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" found: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 8679464e-c14a-455c-b46c-5a6fcbeb9ea6)", boundByController: true
I0911 19:19:14.371540  111245 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: claim is already correctly bound
I0911 19:19:14.371584  111245 pv_controller.go:931] binding volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:14.371630  111245 pv_controller.go:829] updating PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: binding to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:14.371723  111245 pv_controller.go:841] updating PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: already bound to "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:14.371772  111245 pv_controller.go:777] updating PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: set phase Bound
I0911 19:19:14.371809  111245 pv_controller.go:780] updating PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: phase Bound already set
I0911 19:19:14.371851  111245 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: binding to "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6"
I0911 19:19:14.371902  111245 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision]: already bound to "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6"
I0911 19:19:14.371946  111245 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: set phase Bound
I0911 19:19:14.371999  111245 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision] status: phase Bound already set
I0911 19:19:14.372047  111245 pv_controller.go:957] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" bound to claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision"
I0911 19:19:14.372099  111245 pv_controller.go:958] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" status after binding: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 8679464e-c14a-455c-b46c-5a6fcbeb9ea6)", boundByController: true
I0911 19:19:14.372156  111245 pv_controller.go:959] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6", bindCompleted: true, boundByController: true
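At this point both claims have settled: syncVolume keeps logging 'volume not bound yet, waiting for syncClaim to fix it' until syncClaim performs the bind and stamps phase Bound on both objects, after which further rounds are no-ops ('already bound', 'phase Bound already set'). A test can wait for that settling with a poll; a sketch, again assuming the pre-context client-go of this era:

    package sketch

    import (
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForClaimBound polls until the claim reports phase Bound or times out.
    func waitForClaimBound(client kubernetes.Interface, ns, name string) error {
        return wait.Poll(time.Second, 30*time.Second, func() (bool, error) {
            claim, err := client.CoreV1().PersistentVolumeClaims(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return claim.Status.Phase == v1.ClaimBound, nil
        })
    }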
I0911 19:19:14.450559  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.602659ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.550609  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.714553ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.650451  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.574188ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.750749  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.79692ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.850482  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.633271ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:14.950855  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.942781ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.033349  111245 cache.go:669] Couldn't expire cache for pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canbind-or-provision. Binding is still in progress.
I0911 19:19:15.050717  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.753921ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.150649  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.681611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.250485  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.55236ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.350559  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.709912ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.352533  111245 scheduler_binder.go:546] All PVCs for pod "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canbind-or-provision" are bound
I0911 19:19:15.352619  111245 factory.go:606] Attempting to bind pod-pvc-canbind-or-provision to node-1
I0911 19:19:15.354758  111245 httplog.go:90] POST /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision/binding: (1.90343ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.355100  111245 scheduler.go:667] pod volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pod-pvc-canbind-or-provision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
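The POST to .../pods/pod-pvc-canbind-or-provision/binding above is the scheduler writing the Binding subresource once all of the pod's PVCs are bound. A sketch of the same call via client-go (the node name is whatever the scheduler picked; Bind's pre-context signature matches this repo's vintage):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // bindPodToNode issues the same binding subresource write logged above.
    func bindPodToNode(client kubernetes.Interface, ns, pod, node string) error {
        return client.CoreV1().Pods(ns).Bind(&v1.Binding{
            ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: ns},
            Target:     v1.ObjectReference{Kind: "Node", Name: node},
        })
    }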
I0911 19:19:15.356704  111245 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/events: (1.309187ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.450513  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods/pod-pvc-canbind-or-provision: (1.682994ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.452286  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-w-canbind: (1.276262ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.453652  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (955.625µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.454828  111245 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (847.791µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.459605  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (4.45431ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.463511  111245 pv_controller_base.go:258] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" deleted
I0911 19:19:15.463641  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" with version 59192
I0911 19:19:15.463723  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision (uid: 8679464e-c14a-455c-b46c-5a6fcbeb9ea6)", boundByController: true
I0911 19:19:15.463793  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision
I0911 19:19:15.464931  111245 pv_controller_base.go:258] claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" deleted
I0911 19:19:15.465083  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (5.100407ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.465152  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-canprovision: (719.458µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46158]
I0911 19:19:15.465436  111245 pv_controller.go:547] synchronizing PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision not found
I0911 19:19:15.465466  111245 pv_controller.go:575] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" is released and reclaim policy "Delete" will be executed
I0911 19:19:15.465479  111245 pv_controller.go:777] updating PersistentVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: set phase Released
I0911 19:19:15.467675  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6/status: (1.974579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.467856  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" with version 59205
I0911 19:19:15.467879  111245 pv_controller.go:798] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" entered phase "Released"
I0911 19:19:15.467888  111245 pv_controller.go:1022] reclaimVolume[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: policy is Delete
I0911 19:19:15.467905  111245 pv_controller.go:1631] scheduleOperation[delete-pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6[bbe8572b-b24e-4abb-9c23-cc94b053f2ba]]
I0911 19:19:15.467926  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" with version 59187
I0911 19:19:15.467945  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: phase: Bound, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind (uid: b2e0689b-9c24-424d-bcd9-c71a24f2ab8d)", boundByController: true
I0911 19:19:15.467954  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind
I0911 19:19:15.468092  111245 pv_controller.go:1146] deleteVolumeOperation [pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6] started
I0911 19:19:15.469471  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6: (865.294µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46164]
I0911 19:19:15.469737  111245 pv_controller.go:1250] isVolumeReleased[pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: volume is released
I0911 19:19:15.469797  111245 pv_controller.go:1285] doDeleteVolume [pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]
I0911 19:19:15.469842  111245 pv_controller.go:1316] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" deleted
I0911 19:19:15.469884  111245 pv_controller.go:1193] deleteVolumeOperation [pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6]: success
I0911 19:19:15.470318  111245 httplog.go:90] GET /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims/pvc-w-canbind: (2.15627ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.470589  111245 pv_controller.go:547] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind not found
I0911 19:19:15.470621  111245 pv_controller.go:575] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" is released and reclaim policy "Delete" will be executed
I0911 19:19:15.470634  111245 pv_controller.go:777] updating PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: set phase Released
I0911 19:19:15.470872  111245 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6: (841.983µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46164]
I0911 19:19:15.471097  111245 pv_controller.go:1200] failed to delete volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" from database: persistentvolumes "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" not found
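The 404 on the DELETE just above looks like a benign race rather than a real failure: the teardown's collection-wide delete of persistent volumes appears to remove the object before deleteVolumeOperation issues its individual delete, so the controller merely logs 'not found' and moves on. The deletes happen at all because these volumes carry reclaim policy Delete; a sketch of the relevant PV spec fields (capacity and the HostPath source are illustrative assumptions, not values from this run):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // deletableVolume builds a PV whose release triggers deleteVolumeOperation,
    // as in the reclaimVolume "policy is Delete" lines above.
    func deletableVolume(name string) *v1.PersistentVolume {
        return &v1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: v1.PersistentVolumeSpec{
                PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimDelete,
                Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
                AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
                PersistentVolumeSource: v1.PersistentVolumeSource{
                    HostPath: &v1.HostPathVolumeSource{Path: "/tmp/" + name}, // illustrative source
                },
            },
        }
    }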
I0911 19:19:15.472057  111245 store.go:228] deletion of /c7519225-3cdd-45ac-81fd-d821fbe2b7f3/persistentvolumes/pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d failed because of a conflict, going to retry
I0911 19:19:15.472413  111245 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d/status: (1.573187ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.472633  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" with version 59208
I0911 19:19:15.472662  111245 pv_controller.go:798] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" entered phase "Released"
I0911 19:19:15.472672  111245 pv_controller.go:1022] reclaimVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: policy is Delete
I0911 19:19:15.472690  111245 pv_controller.go:1631] scheduleOperation[delete-pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d[5ae5589c-1319-4290-8252-429590c5987a]]
I0911 19:19:15.472731  111245 pv_controller_base.go:212] volume "pvc-8679464e-c14a-455c-b46c-5a6fcbeb9ea6" deleted
I0911 19:19:15.472767  111245 pv_controller_base.go:212] volume "pv-w-canbind" deleted
I0911 19:19:15.472789  111245 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" with version 59208
I0911 19:19:15.472818  111245 pv_controller.go:489] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: phase: Released, bound to: "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind (uid: b2e0689b-9c24-424d-bcd9-c71a24f2ab8d)", boundByController: true
I0911 19:19:15.472840  111245 pv_controller.go:514] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: volume is bound to claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind
I0911 19:19:15.472855  111245 pv_controller.go:1146] deleteVolumeOperation [pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d] started
I0911 19:19:15.472861  111245 pv_controller.go:547] synchronizing PersistentVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: claim volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind not found
I0911 19:19:15.472874  111245 pv_controller.go:1022] reclaimVolume[pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d]: policy is Delete
I0911 19:19:15.472883  111245 pv_controller.go:1631] scheduleOperation[delete-pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d[5ae5589c-1319-4290-8252-429590c5987a]]
I0911 19:19:15.472887  111245 pv_controller_base.go:396] deletion of claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-canprovision" was already processed
I0911 19:19:15.472890  111245 pv_controller.go:1642] operation "delete-pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d[5ae5589c-1319-4290-8252-429590c5987a]" is already running, skipping
I0911 19:19:15.473526  111245 pv_controller_base.go:212] volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" deleted
I0911 19:19:15.473566  111245 pv_controller_base.go:396] deletion of claim "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind" was already processed
I0911 19:19:15.473528  111245 httplog.go:90] DELETE /api/v1/persistentvolumes: (7.870731ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46158]
I0911 19:19:15.473870  111245 httplog.go:90] GET /api/v1/persistentvolumes/pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d: (812.394µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.474064  111245 pv_controller.go:1153] error reading persistent volume "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d": persistentvolumes "pvc-b2e0689b-9c24-424d-bcd9-c71a24f2ab8d" not found
I0911 19:19:15.480177  111245 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.248923ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46158]
I0911 19:19:15.480321  111245 volume_binding_test.go:932] test cluster "volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a" start to tear down
I0911 19:19:15.481407  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pods: (865.591µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.482684  111245 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/persistentvolumeclaims: (927.861µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.483893  111245 httplog.go:90] DELETE /api/v1/persistentvolumes: (898.439µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.485034  111245 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (786.67µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.485522  111245 pv_controller_base.go:298] Shutting down persistent volume controller
I0911 19:19:15.485552  111245 pv_controller_base.go:409] claim worker queue shutting down
I0911 19:19:15.485554  111245 pv_controller_base.go:352] volume worker queue shutting down
I0911 19:19:15.485804  111245 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=58699&timeout=7m19s&timeoutSeconds=439&watch=true: (23.249749175s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45972]
I0911 19:19:15.485872  111245 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=58938&timeout=7m57s&timeoutSeconds=477&watch=true: (24.452351263s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45958]
I0911 19:19:15.485924  111245 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=58699&timeout=9m14s&timeoutSeconds=554&watch=true: (24.452657753s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45956]
I0911 19:19:15.485941  111245 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=58699&timeout=5m40s&timeoutSeconds=340&watch=true: (24.453068693s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45946]
I0911 19:19:15.485880  111245 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=58699&timeout=8m54s&timeoutSeconds=534&watch=true: (24.450336495s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45964]
I0911 19:19:15.485828  111245 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=58699&timeout=9m40s&timeoutSeconds=580&watch=true: (23.249649335s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45982]
I0911 19:19:15.486008  111245 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=58699&timeout=9m9s&timeoutSeconds=549&watch=true: (23.249887015s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45978]
I0911 19:19:15.485943  111245 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=58699&timeout=9m2s&timeoutSeconds=542&watch=true: (24.452933545s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45960]
I0911 19:19:15.485866  111245 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=58699&timeout=5m30s&timeoutSeconds=330&watch=true: (23.249721923s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45980]
I0911 19:19:15.486050  111245 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=58699&timeout=7m0s&timeoutSeconds=420&watch=true: (24.451267408s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45962]
I0911 19:19:15.485881  111245 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=58699&timeout=5m58s&timeoutSeconds=358&watch=true: (24.452411832s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45948]
I0911 19:19:15.485809  111245 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=58699&timeout=7m25s&timeoutSeconds=445&watch=true: (23.249751808s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45968]
I0911 19:19:15.486161  111245 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=58699&timeout=8m25s&timeoutSeconds=505&watch=true: (24.453287397s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45914]
I0911 19:19:15.486189  111245 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=58699&timeout=6m48s&timeoutSeconds=408&watch=true: (24.453026227s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45950]
I0911 19:19:15.486205  111245 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=58699&timeout=6m5s&timeoutSeconds=365&watch=true: (24.453306055s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45954]
I0911 19:19:15.486435  111245 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=58699&timeout=8m33s&timeoutSeconds=513&watch=true: (24.453840647s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45934]
I0911 19:19:15.489441  111245 httplog.go:90] DELETE /api/v1/nodes: (3.021204ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.489597  111245 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0911 19:19:15.490933  111245 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.139866ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
I0911 19:19:15.492784  111245 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.440489ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46120]
W0911 19:19:15.493188  111245 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0911 19:19:15.493208  111245 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
--- FAIL: TestVolumeProvision (28.02s)
    volume_binding_test.go:1149: Provisioning annotation on PVC volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind not behaving as expected: PVC volume-scheduling0f597299-97a6-4de8-bbec-113b126c837a/pvc-w-canbind not expected to be provisioned, but found selected-node annotation
    volume_binding_test.go:1191: PV pv-w-canbind phase not Bound, got Available

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190911-190759.xml
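A reproduction command in the style of the one given for TestAggregatedAPIServer; the package path is inferred from the volumescheduling.test binary name in the log, so treat it as an assumption:

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$

The first assertion keys off the scheduler's selected-node annotation, which delayed-binding provisioning writes onto a PVC; a sketch of that check (the key is the upstream volume.kubernetes.io/selected-node annotation):

    package sketch

    import v1 "k8s.io/api/core/v1"

    // wasScheduledForProvisioning reports whether the scheduler marked the
    // claim for provisioning by stamping the selected-node annotation.
    func wasScheduledForProvisioning(pvc *v1.PersistentVolumeClaim) bool {
        _, ok := pvc.Annotations["volume.kubernetes.io/selected-node"]
        return ok
    }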
