PR: wojtek-t: [WIP][DO NOT REVIEW] Deprecate SelfLink field
Result: FAILURE
Tests: 6 failed / 2452 succeeded
Started: 2019-08-14 13:23
Elapsed: 28m57s
Revision:
Builder: gke-prow-ssd-pool-1a225945-cx5g
Refs: master:34791349, 80640:19423e8c
pod: a371cc03-be96-11e9-8926-5e2f786d826e
infra-commit: 6e5b38c23
repo: k8s.io/kubernetes
repo-commit: 9a46a200100c7fa6b462e509b4073b02c53d5443
repos: {'k8s.io/kubernetes': 'master:34791349d656a9f8e45b7093012e29ad08782ffa,80640:19423e8cd9dfa310ed15a6cbc2bf52b72901e611'}

Test Failures


k8s.io/kubernetes/test/integration/apiserver/admissionwebhook TestWebhookAdmissionWithWatchCache 30s

go test -v k8s.io/kubernetes/test/integration/apiserver/admissionwebhook -run TestWebhookAdmissionWithWatchCache$
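Note: these integration tests dial a local etcd on 127.0.0.1:2379 (visible in the resolver logs below), so one must be running before the command above. A minimal sketch for providing it from a kubernetes checkout, assuming the repo's standard helper script and its default install path:

./hack/install-etcd.sh                    # installs etcd under third_party/etcd
export PATH="$PWD/third_party/etcd:$PATH"
etcd &                                    # listens on the default 127.0.0.1:2379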
=== RUN   TestWebhookAdmissionWithWatchCache
I0814 13:42:18.903506  105698 serving.go:312] Generated self-signed cert (/tmp/kubernetes-kube-apiserver841054251/apiserver.crt, /tmp/kubernetes-kube-apiserver841054251/apiserver.key)
I0814 13:42:18.903647  105698 server.go:570] external host was not specified, using 127.0.0.1
W0814 13:42:18.903667  105698 authentication.go:416] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0814 13:42:20.113492  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:42:20.113765  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:42:20.113922  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:42:20.114271  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:42:20.115799  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:42:20.116039  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:42:20.116238  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:42:20.116399  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:42:20.116768  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:42:20.117113  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:42:20.117411  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 13:42:20.117635  105698 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0814 13:42:20.117734  105698 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0814 13:42:20.118871  105698 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0814 13:42:20.119073  105698 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0814 13:42:20.121498  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.121697  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.121884  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.123666  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.124196  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.124222  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.124264  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.124337  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.124407  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.124777  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 13:42:20.156572  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 13:42:20.158723  105698 master.go:234] Using reconciler: lease
I0814 13:42:20.159122  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.159391  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.159679  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.159910  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.161356  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.164204  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.164419  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.164606  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.164835  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.166105  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.173989  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.174025  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.174210  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.174326  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.177316  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.177340  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.177386  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.177467  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.177540  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.178390  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.178409  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.178445  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.178526  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.178792  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.179992  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.180018  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.180060  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.180124  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.180430  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.182538  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.182560  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.182629  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.182694  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.183179  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.186812  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.187075  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.187095  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.187132  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.187396  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.188849  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.188984  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.188997  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.189036  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.189156  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.189758  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.189774  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.189814  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.189872  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.190123  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.194739  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.195766  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.195791  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.199620  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.199787  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.204429  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.204447  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.204482  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.204521  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.204753  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.206279  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.207717  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.208139  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.208331  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.208534  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.209710  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.211822  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.212076  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.212259  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.212632  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.230954  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.231303  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.231321  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.231363  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.231870  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.233307  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.233324  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.233358  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.233397  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.233664  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.234002  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.234014  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.234049  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.234089  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.234153  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.234574  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.234624  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.234653  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.234692  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.234730  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.239383  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.387333  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.387370  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.387414  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.387490  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.396553  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.396886  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.396904  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.396943  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.396993  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.397573  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.397605  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.397642  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.397689  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.397954  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.398454  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.398468  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.398502  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.398574  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.398807  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.399255  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.399271  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.399301  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.399357  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.399540  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.400044  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.400060  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.400090  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.400133  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.400342  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.400758  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.406496  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.406519  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.406556  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.406809  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.412463  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.412497  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.412545  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.412616  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.412856  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.413374  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.413404  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.413436  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.413475  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.413712  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.414170  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.414197  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.414225  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.414265  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.414452  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.414886  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.414906  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.414932  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.414968  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.415133  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.415560  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.416266  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.425887  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.425966  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.426024  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.429298  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.429824  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.429840  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.429874  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.434717  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.435648  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.436174  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.436434  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.436656  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.439346  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.440251  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.440272  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.440306  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.440348  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.440610  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.441096  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.441112  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.441140  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.441181  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.441388  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.441824  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.441840  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.441868  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.441909  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.442121  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.442531  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.442546  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.442576  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.442634  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.442806  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.443151  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.443165  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.443195  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.443232  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.443402  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.449429  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.449637  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.449462  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.449807  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.450231  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.451326  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.451926  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.452232  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.452428  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.452857  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.454308  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.454955  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.455164  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.455353  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.455565  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.456354  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.456531  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.456630  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.456774  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.456944  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.457778  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.457990  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.458010  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.458050  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.458258  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.458579  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.458630  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.458644  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.458674  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.458728  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.459172  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.459192  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.459220  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.459258  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.459470  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.461615  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.461633  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.461677  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.461726  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.461884  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.463570  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.463758  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.463763  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.463806  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.463873  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.464209  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.464221  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.464253  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.464312  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.464433  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.465561  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.465578  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.465629  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.465669  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.465987  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.466415  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.467064  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.467080  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.467188  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.467235  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.498653  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.524995  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.525133  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.499832  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.530607  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.531743  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.531767  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.531818  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.531859  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.531944  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.532610  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.532634  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.532684  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.532746  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.533000  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.541265  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.544634  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.544670  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.544722  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.544812  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.545211  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.552335  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.552368  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.552419  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.552882  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.553466  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.553769  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.553794  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.553833  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.553915  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.555520  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.555551  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.555617  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.555697  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.555899  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.557170  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.557191  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.557272  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.557372  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.558314  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.566980  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.567009  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.567061  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.567162  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.567385  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.567735  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.568736  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.568756  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.568790  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.568941  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.570252  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.570267  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.570300  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.570334  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.570534  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.571502  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.571508  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.571522  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.571554  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.571888  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.573900  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.574143  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.574166  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.574199  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.574366  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.574885  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.576045  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.576196  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.576349  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.576668  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.577825  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.578260  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.578272  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.578294  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.578330  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.600772  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.600841  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.600969  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.601101  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.601319  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.602241  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.602263  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.602295  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.602339  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.602566  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.603309  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.603325  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.603355  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.603428  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.603784  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.604417  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.604433  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.604463  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.604502  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.604737  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.605264  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.605280  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.605309  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.605347  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.605653  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.606205  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.606347  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.606373  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.606403  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.606639  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.607183  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.607198  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.607229  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.607297  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.607495  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.608328  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.608344  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.608375  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.608416  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.608637  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.609324  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.609343  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.609373  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.609458  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.609697  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.610336  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.610353  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.610385  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.610427  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.610712  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.611085  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.616838  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.616861  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.616896  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.616982  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.617374  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.617666  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.617686  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.617716  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.617763  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.618533  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.619334  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.619348  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.619405  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.619805  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.626493  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.626510  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.626540  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.626635  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.626986  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.627440  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.627451  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.627476  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.627509  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.627715  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.628171  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.628183  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.628233  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.628268  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.628418  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.634389  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.634851  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.635060  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.635236  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.635608  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.636220  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.917040  105698 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0814 13:42:20.917082  105698 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
W0814 13:42:20.918739  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 13:42:20.918869  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.918887  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.918934  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.919006  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.919720  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:20.919856  105698 client.go:354] parsed scheme: ""
I0814 13:42:20.919876  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:20.919919  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:20.919978  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 13:42:20.922212  105698 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 13:42:20.923814  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:21.112821  105698 client.go:354] parsed scheme: ""
I0814 13:42:21.112851  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:21.112900  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:21.112983  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:21.113761  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:26.578417  105698 secure_serving.go:116] Serving securely on 127.0.0.1:33927
I0814 13:42:26.578687  105698 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0814 13:42:26.588562  105698 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0814 13:42:26.578796  105698 controller.go:81] Starting OpenAPI AggregationController
I0814 13:42:26.583923  105698 available_controller.go:383] Starting AvailableConditionController
I0814 13:42:26.589236  105698 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0814 13:42:26.583959  105698 autoregister_controller.go:140] Starting autoregister controller
I0814 13:42:26.589259  105698 cache.go:32] Waiting for caches to sync for autoregister controller
I0814 13:42:26.584048  105698 crdregistration_controller.go:112] Starting crd-autoregister controller
I0814 13:42:26.589284  105698 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
I0814 13:42:26.584204  105698 crd_finalizer.go:255] Starting CRDFinalizer
I0814 13:42:26.584219  105698 controller.go:83] Starting OpenAPI controller
I0814 13:42:26.584225  105698 customresource_discovery_controller.go:208] Starting DiscoveryController
I0814 13:42:26.584230  105698 naming_controller.go:288] Starting NamingConditionController
I0814 13:42:26.584235  105698 establishing_controller.go:73] Starting EstablishingController
I0814 13:42:26.584240  105698 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0814 13:42:26.584246  105698 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0814 13:42:26.588231  105698 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /48f338a2-7310-4ccb-b4f5-56e01ffbeb49/registry/masterleases/127.0.0.1, ResourceVersion: 0, AdditionalErrorMsg: 
I0814 13:42:26.689500  105698 cache.go:39] Caches are synced for AvailableConditionController controller
I0814 13:42:26.689549  105698 cache.go:39] Caches are synced for autoregister controller
I0814 13:42:26.689981  105698 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0814 13:42:26.730496  105698 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0814 13:42:27.578480  105698 controller.go:107] OpenAPI AggregationController: Processing item 
I0814 13:42:27.578515  105698 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0814 13:42:27.578537  105698 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0814 13:42:27.623968  105698 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 13:42:27.689603  105698 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 13:42:27.689787  105698 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
W0814 13:42:27.790963  105698 lease.go:223] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0814 13:42:27.792427  105698 controller.go:218] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
--- FAIL: TestWebhookAdmissionWithWatchCache (30.20s)
    testserver.go:147: runtime-config=map[api/all:true extensions/v1beta1/daemonsets:true extensions/v1beta1/deployments:true extensions/v1beta1/networkpolicies:true extensions/v1beta1/podsecuritypolicies:true extensions/v1beta1/replicasets:true]
    testserver.go:148: Starting kube-apiserver on port 33927...
    testserver.go:171: Waiting for /healthz to be ok...

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-134058.xml
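The harness output above shows the test waiting on the apiserver's /healthz endpoint before the subtests run. When reproducing locally, the same endpoint can be probed by hand; a sketch using the port from this particular run (the port is picked per run, so yours will differ):

curl -k https://127.0.0.1:33927/healthz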



k8s.io/kubernetes/test/integration/apiserver/admissionwebhook TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions 0.45s

go test -v k8s.io/kubernetes/test/integration/apiserver/admissionwebhook -run TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions$
=== RUN   TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions
    --- FAIL: TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions (0.45s)

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-134058.xml



k8s.io/kubernetes/test/integration/apiserver/admissionwebhook TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete 0.16s

go test -v k8s.io/kubernetes/test/integration/apiserver/admissionwebhook -run TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete$
=== RUN   TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete
I0814 13:42:34.543759  105698 client.go:354] parsed scheme: ""
I0814 13:42:34.543787  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:34.543825  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:34.544030  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:34.557211  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
        --- FAIL: TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete (0.16s)
            admission_test.go:679: waiting for schema.GroupVersionResource{Group:"apiextensions.k8s.io", Version:"v1beta1", Resource:"customresourcedefinitions"} to be deleted (name: openshiftwebconsoleconfigs.webconsole.operator.openshift.io, finalizers: [customresourcecleanup.apiextensions.k8s.io])...
            admission_test.go:705: CustomResourceDefinition.apiextensions.k8s.io "openshiftwebconsoleconfigs.webconsole.operator.openshift.io" is invalid: metadata.finalizers: Forbidden: no new finalizers can be added if the object is being deleted, found new finalizers []string{"test/k8s.io"}

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-134058.xml
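The Forbidden error above is standard apiserver metadata validation: once an object carries a deletionTimestamp, an update may remove finalizers but may not add new ones. A hedged sketch of the kind of update that trips this rule, issued with kubectl against a live cluster (the CRD and finalizer names are the ones from this test):

kubectl patch crd openshiftwebconsoleconfigs.webconsole.operator.openshift.io --type=merge -p '{"metadata":{"finalizers":["customresourcecleanup.apiextensions.k8s.io","test/k8s.io"]}}'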



k8s.io/kubernetes/test/integration/apiserver/admissionwebhook TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/deletecollection 0.20s

go test -v k8s.io/kubernetes/test/integration/apiserver/admissionwebhook -run TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/deletecollection$
=== RUN   TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/deletecollection
I0814 13:42:34.834535  105698 client.go:354] parsed scheme: ""
I0814 13:42:34.834822  105698 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:42:34.835030  105698 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:42:34.835189  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:42:34.836929  105698 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
        --- FAIL: TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/deletecollection (0.20s)
            admission_test.go:777: customresourcedefinitions.apiextensions.k8s.io "openshiftwebconsoleconfigs.webconsole.operator.openshift.io" not found
            admission_test.go:320: version: v1, phase:mutation, converted:false error: no request received
            admission_test.go:320: version: v1beta1, phase:mutation, converted:false error: no request received
            admission_test.go:320: version: v1, phase:validation, converted:true error: no request received
            admission_test.go:320: version: v1beta1, phase:validation, converted:true error: no request received
            admission_test.go:320: version: v1, phase:validation, converted:false error: no request received
            admission_test.go:320: version: v1beta1, phase:validation, converted:false error: no request received
            admission_test.go:320: version: v1, phase:mutation, converted:true error: no request received
            admission_test.go:320: version: v1beta1, phase:mutation, converted:true error: no request received
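
Two failures compound here: the deletecollection call targeted a CRD that was already gone (the "not found" at admission_test.go:777, plausibly left over from the preceding delete subtest), so no request ever reached the webhooks, and the test then reports every cell of its expectation matrix as missing: each combination of API version (v1, v1beta1), phase (mutation, validation), and converted (true, false). A hedged sketch of that matrix check, with illustrative type and function names:

package main

import "fmt"

type recordedRequest struct {
	version   string // "v1" or "v1beta1"
	phase     string // "mutation" or "validation"
	converted bool
}

// verifyAllPhasesSeen returns one error per version/phase/converted
// combination for which no admission request was recorded, matching the
// eight "no request received" lines above.
func verifyAllPhasesSeen(seen map[recordedRequest]bool) []error {
	var errs []error
	for _, v := range []string{"v1", "v1beta1"} {
		for _, p := range []string{"mutation", "validation"} {
			for _, c := range []bool{false, true} {
				if !seen[recordedRequest{v, p, c}] {
					errs = append(errs, fmt.Errorf("version: %s, phase:%s, converted:%v error: no request received", v, p, c))
				}
			}
		}
	}
	return errs
}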

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-134058.xml



k8s.io/kubernetes/test/integration/master TestEmptyList 3.54s

go test -v k8s.io/kubernetes/test/integration/master -run TestEmptyList$
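
For context (hedged, since this test's failure output runs past the log excerpted below): TestEmptyList in test/integration/master issues a raw GET against an empty collection and asserts that the list serializes with "items": [] rather than "items": null, the kind of list-metadata behavior a SelfLink change could plausibly disturb. A sketch of that style of assertion; the namespace path and substring checks are assumptions, not the test's exact code:

package main

import (
	"strings"
	"testing"

	"k8s.io/client-go/kubernetes"
)

// checkEmptyPodList asserts that listing an empty pod collection serializes
// items as an empty array and never as null.
func checkEmptyPodList(t *testing.T, client kubernetes.Interface) {
	raw, err := client.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/namespaces/test-empty-list/pods").
		DoRaw()
	if err != nil {
		t.Fatal(err)
	}
	body := string(raw)
	if strings.Contains(body, `"items": null`) || !strings.Contains(body, `"items": []`) {
		t.Errorf("expected an empty items array, got: %s", body)
	}
}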
=== RUN   TestEmptyList
I0814 13:49:33.146750  108557 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 13:49:33.146878  108557 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 13:49:33.146975  108557 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 13:49:33.147062  108557 master.go:234] Using reconciler: 
I0814 13:49:33.148474  108557 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.148727  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.148837  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.148950  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.149154  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.149835  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.150015  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.150261  108557 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 13:49:33.150354  108557 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 13:49:33.150538  108557 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.151348  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.151555  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.151500  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.151798  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.151957  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.152463  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.152551  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.152714  108557 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 13:49:33.152754  108557 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.152830  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.152841  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.152872  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.152916  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.153010  108557 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 13:49:33.153250  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.153418  108557 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 13:49:33.153453  108557 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.153473  108557 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 13:49:33.153519  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.153530  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.153558  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.153678  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.153810  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.153945  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.154070  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.154109  108557 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 13:49:33.154191  108557 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 13:49:33.154292  108557 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.154361  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.154377  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.154408  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.154430  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.154456  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.154457  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.154794  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.154870  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.154976  108557 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 13:49:33.155087  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.155090  108557 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.155127  108557 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 13:49:33.155151  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.155162  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.155192  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.155298  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.155549  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.155663  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.155706  108557 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 13:49:33.155784  108557 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 13:49:33.155819  108557 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.155883  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.155898  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.155927  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.155964  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.156236  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.156280  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.156338  108557 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 13:49:33.156398  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.156498  108557 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 13:49:33.156488  108557 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.156617  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.156628  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.156655  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.156720  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.157002  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.157122  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.157184  108557 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 13:49:33.157263  108557 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 13:49:33.157296  108557 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.157357  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.157373  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.157396  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.157509  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.157747  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.157823  108557 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 13:49:33.157919  108557 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.157987  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.157999  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.158019  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.158035  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.158058  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.158082  108557 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 13:49:33.158339  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.158450  108557 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 13:49:33.158474  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.158521  108557 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 13:49:33.158724  108557 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.158802  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.158962  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.159209  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.159389  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.160330  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.160359  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.160388  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.160466  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.160501  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.160513  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.160562  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.160665  108557 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 13:49:33.160835  108557 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.160902  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.160913  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.160929  108557 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 13:49:33.160942  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.160983  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.161249  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.161381  108557 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 13:49:33.161505  108557 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.161571  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.161601  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.161634  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.161683  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.161727  108557 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 13:49:33.161932  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.162361  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.162899  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.163202  108557 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 13:49:33.163483  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.163584  108557 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 13:49:33.163736  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.163853  108557 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.163970  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.164026  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.164079  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.164166  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.164542  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.164680  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.164743  108557 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 13:49:33.164776  108557 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.164800  108557 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 13:49:33.164866  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.164877  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.164970  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.164971  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.165031  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.165358  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.165449  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.165460  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.165509  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.165563  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.165698  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.166192  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.166233  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.166258  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.166376  108557 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.166432  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.166439  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.166460  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.166498  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.166799  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.166922  108557 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 13:49:33.167151  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.167297  108557 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 13:49:33.167375  108557 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.167517  108557 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.168103  108557 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.168706  108557 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.169049  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.169260  108557 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.169991  108557 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.170288  108557 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.170433  108557 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.170706  108557 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.171309  108557 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.171947  108557 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.172424  108557 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.173215  108557 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.173628  108557 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.174195  108557 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.174456  108557 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.175019  108557 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.175334  108557 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.175620  108557 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.175945  108557 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.176204  108557 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.176396  108557 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.176624  108557 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.177414  108557 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.177795  108557 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.178548  108557 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.179231  108557 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.179703  108557 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.180095  108557 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.180925  108557 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.181296  108557 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.182066  108557 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.183016  108557 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.183691  108557 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.184486  108557 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.185155  108557 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.185417  108557 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 13:49:33.185508  108557 master.go:434] Enabling API group "authentication.k8s.io".
I0814 13:49:33.185598  108557 master.go:434] Enabling API group "authorization.k8s.io".
I0814 13:49:33.185826  108557 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.185990  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.186208  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.187052  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.187238  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.187737  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.188116  108557 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 13:49:33.188378  108557 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.188640  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.188733  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.188825  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.188941  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.189047  108557 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 13:49:33.189288  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.189731  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.189854  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.190038  108557 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 13:49:33.190122  108557 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 13:49:33.190558  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.190814  108557 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.190998  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.191228  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.191334  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.191434  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.191929  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.192112  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.192015  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.192311  108557 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 13:49:33.192387  108557 master.go:434] Enabling API group "autoscaling".
I0814 13:49:33.192444  108557 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 13:49:33.192720  108557 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.192862  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.192933  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.193418  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.193482  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.193678  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.193945  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.194120  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.194910  108557 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 13:49:33.194974  108557 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 13:49:33.195204  108557 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.195259  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.195270  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.195289  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.195371  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.195617  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.195646  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.195672  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.196199  108557 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 13:49:33.196225  108557 master.go:434] Enabling API group "batch".
I0814 13:49:33.196319  108557 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.196373  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.196428  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.196492  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.196563  108557 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 13:49:33.196737  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.196994  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.197215  108557 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 13:49:33.197280  108557 master.go:434] Enabling API group "certificates.k8s.io".
I0814 13:49:33.197372  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.197450  108557 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 13:49:33.197608  108557 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.197865  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.197880  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.197904  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.198195  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.198349  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.198666  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.199039  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.198744  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.199681  108557 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 13:49:33.199884  108557 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.200014  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.200086  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.200164  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.199901  108557 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 13:49:33.200419  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.200799  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.200940  108557 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 13:49:33.200956  108557 master.go:434] Enabling API group "coordination.k8s.io".
I0814 13:49:33.201058  108557 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.201116  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.201127  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.201159  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.201190  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.201215  108557 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 13:49:33.201360  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.201739  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.201961  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.202003  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.202079  108557 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 13:49:33.202103  108557 master.go:434] Enabling API group "extensions".
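
Side note on the storage_factory.go dumps repeated above: the two large numbers are raw nanosecond durations. The sketch below is a minimal stand-in mirroring only the fields visible in the log (transportConfig/storageConfig are invented names, not the real k8s.io/apiserver storagebackend types) to show that CompactionInterval:300000000000 and CountMetricPollPeriod:60000000000 are simply 5m and 1m.

    package main

    import (
        "fmt"
        "time"
    )

    // transportConfig and storageConfig are hypothetical mirrors of the
    // fields printed in the storage_factory.go log lines.
    type transportConfig struct {
        ServerList                []string
        KeyFile, CertFile, CAFile string
    }

    type storageConfig struct {
        Type, Prefix          string
        Transport             transportConfig
        Paging                bool
        CompactionInterval    time.Duration
        CountMetricPollPeriod time.Duration
    }

    func main() {
        cfg := storageConfig{
            Prefix:                "fb44a83a-cf13-4fd2-ab97-2ba687dbaf98",
            Transport:             transportConfig{ServerList: []string{"http://127.0.0.1:2379"}},
            Paging:                true,
            CompactionInterval:    300000000000, // raw nanoseconds, as logged
            CountMetricPollPeriod: 60000000000,
        }
        fmt.Println(cfg.CompactionInterval, cfg.CountMetricPollPeriod) // prints: 5m0s 1m0s
    }
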
I0814 13:49:33.202258  108557 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 13:49:33.202247  108557 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.202398  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.202405  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.202424  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.202481  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.202878  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.202904  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.203016  108557 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 13:49:33.203045  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.203097  108557 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 13:49:33.203180  108557 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.203261  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.203275  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.203307  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.203361  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.204031  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.204153  108557 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 13:49:33.204156  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.204169  108557 master.go:434] Enabling API group "networking.k8s.io".
I0814 13:49:33.204200  108557 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.204273  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.204283  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.204312  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.204358  108557 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 13:49:33.204513  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.204540  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.204679  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.204805  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.204838  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.204913  108557 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 13:49:33.204933  108557 master.go:434] Enabling API group "node.k8s.io".
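
The paired "Listing and watching *Type from storage/cacher.go" and "Replace watchCache (rev: N)" lines follow the usual list-then-watch pattern: list once for a consistent snapshot plus its revision, replace the cache with that snapshot, then apply watch events from that revision onward. A rough runnable sketch under invented types (fakeStore, event are stand-ins, not client-go's API):

    package main

    import "fmt"

    type event struct{ typ, obj string }

    // fakeStore stands in for the etcd-backed storage a reflector lists from.
    type fakeStore struct{ ch chan event }

    func (f fakeStore) list() ([]string, int)     { return []string{"lease-a"}, 35114 }
    func (f fakeStore) watch(rv int) <-chan event { return f.ch }

    func listAndWatch(s fakeStore, replace func([]string, int)) {
        items, rv := s.list()
        replace(items, rv) // the "Replace watchCache (rev: 35114)" step
        for ev := range s.watch(rv) {
            fmt.Println("apply", ev.typ, ev.obj) // incremental updates after the snapshot
        }
    }

    func main() {
        ch := make(chan event, 1)
        ch <- event{"ADDED", "lease-b"}
        close(ch)
        listAndWatch(fakeStore{ch}, func(items []string, rv int) {
            fmt.Println("replace cache at rev", rv, "with", items)
        })
    }
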
I0814 13:49:33.204968  108557 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 13:49:33.205045  108557 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.205090  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.205097  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.205118  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.205158  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.205685  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.205785  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.205803  108557 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 13:49:33.205933  108557 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 13:49:33.205933  108557 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.206001  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.206011  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.206181  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.206223  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.206239  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.206310  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.206433  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.206474  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.206576  108557 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 13:49:33.206607  108557 master.go:434] Enabling API group "policy".
I0814 13:49:33.206643  108557 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.206701  108557 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 13:49:33.206707  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.206739  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.206774  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.206808  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.207393  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.207420  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.207522  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.207659  108557 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 13:49:33.207911  108557 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.208120  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.208230  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.207727  108557 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 13:49:33.208377  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.208199  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.208511  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.208892  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.208969  108557 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 13:49:33.208994  108557 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.209046  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.209066  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.209082  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.209085  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.209106  108557 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 13:49:33.209162  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.209776  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.210264  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.210475  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.210536  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.210564  108557 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 13:49:33.210673  108557 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 13:49:33.210722  108557 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.210771  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.210783  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.210814  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.210841  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.211092  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.211177  108557 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 13:49:33.211208  108557 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.211256  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.211267  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.211286  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.211313  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.211331  108557 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 13:49:33.211465  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.211735  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.211810  108557 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 13:49:33.211907  108557 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.211964  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.211974  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.211994  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.212032  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.212066  108557 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 13:49:33.212189  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.212425  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.212494  108557 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 13:49:33.212516  108557 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.212560  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.212567  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.212604  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.212639  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.212670  108557 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 13:49:33.212432  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.212806  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.212890  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.213150  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.213229  108557 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 13:49:33.213253  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.213307  108557 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 13:49:33.213337  108557 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.213384  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.213390  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.213409  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.213478  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.213791  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.213841  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.213894  108557 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 13:49:33.213917  108557 master.go:434] Enabling API group "rbac.authorization.k8s.io".
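
One plausible reading of the recurring "Monitoring <resource> count at <storage-prefix>//<resource>" lines, given the CountMetricPollPeriod in the config dumps: each store periodically polls the backend for the number of keys under its prefix and reports it as a metric. Everything below (monitorCount, the fake counter, the 10ms period) is an illustrative assumption, not the real etcd3 store:

    package main

    import (
        "fmt"
        "time"
    )

    // monitorCount polls count(prefix) every `every` for `rounds` iterations.
    func monitorCount(resource, prefix string, count func(string) int, every time.Duration, rounds int) {
        fmt.Printf("Monitoring %s count at %s\n", resource, prefix)
        for i := 0; i < rounds; i++ {
            fmt.Printf("%s: %d objects\n", resource, count(prefix))
            time.Sleep(every)
        }
    }

    func main() {
        fake := func(prefix string) int { return 2 } // pretend the backend reports 2 keys
        monitorCount("roles.rbac.authorization.k8s.io", "<storage-prefix>//roles", fake, 10*time.Millisecond, 3)
    }
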
I0814 13:49:33.213968  108557 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 13:49:33.215392  108557 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.215518  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.215542  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.215566  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.215632  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.215785  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.215785  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.215859  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.216048  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.216050  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.216163  108557 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 13:49:33.216249  108557 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.216303  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.216303  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.216312  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.216334  108557 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 13:49:33.216360  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.216420  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.216842  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.216966  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.216982  108557 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 13:49:33.217000  108557 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 13:49:33.217020  108557 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 13:49:33.217104  108557 master.go:423] Skipping disabled API group "settings.k8s.io".
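
The master.go "Enabling"/"Skipping disabled" pairs come down to whether any version of a group is enabled in the resource config (alpha groups such as settings.k8s.io/v1alpha1 default to off). A simplified sketch with invented names, reproducing the two log messages:

    package main

    import "fmt"

    func main() {
        enabled := map[string]bool{
            "scheduling.k8s.io/v1":     true,
            "settings.k8s.io/v1alpha1": false, // alpha versions are off by default
        }
        groups := []struct {
            name     string
            versions []string
        }{
            {"scheduling.k8s.io", []string{"scheduling.k8s.io/v1"}},
            {"settings.k8s.io", []string{"settings.k8s.io/v1alpha1"}},
        }
        for _, g := range groups {
            on := false
            for _, v := range g.versions {
                on = on || enabled[v]
            }
            if on {
                fmt.Printf("Enabling API group %q.\n", g.name)
            } else {
                fmt.Printf("Skipping disabled API group %q.\n", g.name)
            }
        }
    }
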
I0814 13:49:33.217250  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.217290  108557 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.217416  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.217437  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.217477  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.217528  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.217760  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.218292  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.218443  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.218663  108557 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 13:49:33.218713  108557 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 13:49:33.219012  108557 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.219261  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.219478  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.219626  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.219672  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.220193  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.220221  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.220250  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.220274  108557 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 13:49:33.220303  108557 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.220326  108557 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 13:49:33.220346  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.220353  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.220380  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.220468  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.221009  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.221660  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.222017  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.222125  108557 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 13:49:33.222147  108557 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.222188  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.222195  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.222224  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.222266  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.222317  108557 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 13:49:33.223087  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.223353  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.223402  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.223466  108557 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 13:49:33.223721  108557 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 13:49:33.224022  108557 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.224106  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.224118  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.224159  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.224226  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.224440  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.224730  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.224828  108557 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 13:49:33.224835  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.224889  108557 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 13:49:33.225131  108557 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.225209  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.225220  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.225320  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.225437  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.225819  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.225821  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.226156  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.226390  108557 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 13:49:33.226653  108557 master.go:434] Enabling API group "storage.k8s.io".
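
The four-line gRPC refrain repeated for every store (parsed scheme "" → fallback to default scheme → ccResolverWrapper sends addresses → balancer pins "127.0.0.1:2379") is client-connection bring-up against the single etcd endpoint. A loose sketch of that sequence with invented types (address, dial), not grpc-go's API:

    package main

    import "fmt"

    type address struct{ addr string }

    func dial(scheme string, servers []string) {
        fmt.Printf("parsed scheme: %q\n", scheme)
        if scheme == "" {
            fmt.Println(`scheme "" not registered, fallback to default scheme`)
        }
        addrs := make([]address, 0, len(servers))
        for _, s := range servers {
            addrs = append(addrs, address{s})
        }
        fmt.Println("sending new addresses to cc:", addrs) // resolver -> client conn
        fmt.Println("pin", addrs[0].addr)                   // balancer settles on one endpoint
    }

    func main() {
        dial("", []string{"127.0.0.1:2379"})
    }
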
I0814 13:49:33.226543  108557 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 13:49:33.227244  108557 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.227614  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.227783  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.227956  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.227731  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.228160  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.229349  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.229380  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.229485  108557 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 13:49:33.229642  108557 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 13:49:33.229671  108557 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.229757  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.229785  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.229814  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.229931  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.230199  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.230362  108557 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 13:49:33.230430  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.230494  108557 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.230671  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.230710  108557 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 13:49:33.231142  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.231162  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.231214  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.231274  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.231501  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.232503  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.232707  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.232843  108557 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 13:49:33.232933  108557 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 13:49:33.232997  108557 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.233046  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.233055  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.233086  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.233128  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.233417  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.233524  108557 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 13:49:33.233750  108557 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 13:49:33.233902  108557 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.234013  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.234049  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.234121  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.234161  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.234279  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.234447  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.234720  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.234838  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.234991  108557 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 13:49:33.235100  108557 master.go:434] Enabling API group "apps".
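
The "storing deployments.apps in apps/v1, reading as apps/__internal" phrasing describes round-tripping through a storage version: objects are encoded at one stable external version before being written, and decoded back to the internal hub representation on read. A minimal sketch of that idea; the types and conversion helpers are hypothetical, not the generated apps conversions:

    package main

    import "fmt"

    type internalDeployment struct {
        Name     string
        Replicas int
    }

    type v1Deployment struct {
        Name     string
        Replicas int
    }

    func encodeV1(in internalDeployment) v1Deployment      { return v1Deployment(in) }
    func decodeInternal(v v1Deployment) internalDeployment { return internalDeployment(v) }

    func main() {
        obj := internalDeployment{Name: "web", Replicas: 3}
        stored := encodeV1(obj)        // written to etcd as apps/v1
        read := decodeInternal(stored) // read back as apps/__internal
        fmt.Println(stored, read)
    }
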
I0814 13:49:33.235192  108557 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.235113  108557 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 13:49:33.235330  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.235342  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.235370  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.235417  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.235816  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.235875  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.235915  108557 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 13:49:33.235941  108557 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.235999  108557 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 13:49:33.236015  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.236024  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.236070  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.236115  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.236375  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.236496  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.236675  108557 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 13:49:33.236704  108557 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.236713  108557 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 13:49:33.236757  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.236766  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.236796  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.236843  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.237086  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.237170  108557 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 13:49:33.237194  108557 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.237258  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.237268  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.237292  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.237326  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.237347  108557 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 13:49:33.237526  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.237761  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.237834  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.237841  108557 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 13:49:33.237860  108557 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 13:49:33.237869  108557 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 13:49:33.237893  108557 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.238011  108557 client.go:354] parsed scheme: ""
I0814 13:49:33.238019  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:33.238038  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:33.238074  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.238303  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:33.238400  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:33.238420  108557 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 13:49:33.238565  108557 master.go:434] Enabling API group "events.k8s.io".
I0814 13:49:33.238442  108557 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 13:49:33.238848  108557 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.239191  108557 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.239322  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.239421  108557 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.239534  108557 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.239626  108557 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.239694  108557 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.239730  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.239845  108557 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.239957  108557 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.240026  108557 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.240254  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.240338  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.240343  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.240096  108557 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.240779  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.240900  108557 watch_cache.go:405] Replace watchCache (rev: 35114) 
I0814 13:49:33.241840  108557 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.242125  108557 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.243017  108557 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.243379  108557 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.244194  108557 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.244498  108557 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.245338  108557 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.245643  108557 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.246211  108557 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.246654  108557 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:33.246837  108557 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 13:49:33.247664  108557 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.247947  108557 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.248336  108557 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.249291  108557 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.250239  108557 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.251129  108557 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.251441  108557 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.252279  108557 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.253311  108557 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.253740  108557 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.254542  108557 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:33.254746  108557 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 13:49:33.255666  108557 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.256027  108557 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.256651  108557 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.257220  108557 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.257693  108557 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.258257  108557 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.258887  108557 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.259548  108557 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.259968  108557 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.260495  108557 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.261040  108557 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:33.261155  108557 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 13:49:33.261731  108557 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.262302  108557 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:33.262364  108557 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 13:49:33.262976  108557 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.263453  108557 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.263783  108557 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.264314  108557 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.264781  108557 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.265274  108557 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.265801  108557 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:33.265863  108557 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 13:49:33.266494  108557 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.267109  108557 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.267393  108557 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.268129  108557 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.268356  108557 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.268614  108557 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.269258  108557 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.269495  108557 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.269738  108557 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.270318  108557 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.270612  108557 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.270917  108557 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:33.270975  108557 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 13:49:33.270988  108557 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 13:49:33.271670  108557 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.272199  108557 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.272816  108557 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.273317  108557 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.273913  108557 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fb44a83a-cf13-4fd2-ab97-2ba687dbaf98", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:33.275929  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.275962  108557 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 13:49:33.275973  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.275984  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.275999  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.276007  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.276040  108557 httplog.go:90] GET /healthz: (209.065µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:33.277062  108557 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.1635ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:33.279394  108557 httplog.go:90] GET /api/v1/services: (1.036775ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:33.283189  108557 httplog.go:90] GET /api/v1/services: (974.947µs) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:33.285520  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.285549  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.285563  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.285575  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.285596  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.285627  108557 httplog.go:90] GET /healthz: (234.592µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:33.286392  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (808.375µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.287245  108557 httplog.go:90] GET /api/v1/services: (1.07268ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:33.289147  108557 httplog.go:90] GET /api/v1/services: (1.131346ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:33.289295  108557 httplog.go:90] POST /api/v1/namespaces: (2.280344ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.290413  108557 httplog.go:90] GET /api/v1/namespaces/kube-public: (732.543µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.291957  108557 httplog.go:90] POST /api/v1/namespaces: (1.167677ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.292966  108557 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (811.732µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.295946  108557 httplog.go:90] POST /api/v1/namespaces: (2.522751ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.376721  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.376966  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.377060  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.377152  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.377241  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.377463  108557 httplog.go:90] GET /healthz: (934.463µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:33.390418  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.390457  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.390467  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.390473  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.390480  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.390524  108557 httplog.go:90] GET /healthz: (227.464µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.476756  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.476787  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.476888  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.476908  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.476914  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.476947  108557 httplog.go:90] GET /healthz: (409.041µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:33.486281  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.486317  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.486326  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.486333  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.486338  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.486522  108557 httplog.go:90] GET /healthz: (354.046µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.576737  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.576773  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.576786  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.576795  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.576804  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.576842  108557 httplog.go:90] GET /healthz: (276.395µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:33.586481  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.586505  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.586515  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.586521  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.586526  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.586548  108557 httplog.go:90] GET /healthz: (151.092µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.676732  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.676818  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.676833  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.676843  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.676851  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.676881  108557 httplog.go:90] GET /healthz: (297.063µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:33.686367  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.686394  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.686405  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.686415  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.686422  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.686456  108557 httplog.go:90] GET /healthz: (201.519µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.776709  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.776744  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.776754  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.776760  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.776768  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.776793  108557 httplog.go:90] GET /healthz: (231.588µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:33.786547  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.786597  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.786611  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.786620  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.786628  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.786664  108557 httplog.go:90] GET /healthz: (289.48µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.876816  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.876857  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.876870  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.876880  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.876888  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.876933  108557 httplog.go:90] GET /healthz: (274.266µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:33.886162  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.886208  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.886221  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.886231  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.886238  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.886264  108557 httplog.go:90] GET /healthz: (222.988µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:33.976779  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.976814  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.976840  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.976850  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.976857  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.976895  108557 httplog.go:90] GET /healthz: (262.48µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:33.986209  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:33.986241  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:33.986253  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:33.986262  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:33.986270  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:33.986305  108557 httplog.go:90] GET /healthz: (204.637µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.076696  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:34.076729  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.076738  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:34.076745  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:34.076750  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:34.076779  108557 httplog.go:90] GET /healthz: (249.102µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:34.086282  108557 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:34.086315  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.086328  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:34.086344  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:34.086352  108557 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:34.086392  108557 httplog.go:90] GET /healthz: (235.348µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.146756  108557 client.go:354] parsed scheme: ""
I0814 13:49:34.146785  108557 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:34.146832  108557 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:34.146914  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:34.147349  108557 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:34.147393  108557 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:34.177526  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.177556  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:34.177569  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:34.177578  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:34.177675  108557 httplog.go:90] GET /healthz: (1.104422ms) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:34.187355  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.187376  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:34.187383  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:34.187389  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:34.187416  108557 httplog.go:90] GET /healthz: (1.202199ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.277311  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.163569ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.277386  108557 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.407672ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.278794  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.278814  108557 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:34.278822  108557 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:34.278827  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:34.278850  108557 httplog.go:90] GET /healthz: (1.344211ms) 0 [Go-http-client/1.1 127.0.0.1:34228]
I0814 13:49:34.279216  108557 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.509217ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.279242  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.513265ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0814 13:49:34.279374  108557 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 13:49:34.280394  108557 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (849.908µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.280493  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (905.764µs) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34228]
I0814 13:49:34.281848  108557 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.167164ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.281911  108557 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (4.188619ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.281990  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.173879ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34228]
I0814 13:49:34.282009  108557 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 13:49:34.282028  108557 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0814 13:49:34.283018  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (715.779µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.285637  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.315489ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.285690  108557 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (3.365233ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.290418  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (4.552486ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.290789  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.290830  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.290881  108557 httplog.go:90] GET /healthz: (4.852433ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.293845  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (2.366576ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.295869  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.749297ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.297940  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.713236ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.298934  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (765.19µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.301273  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.045274ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.301463  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 13:49:34.302611  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (920.624µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.304374  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.426847ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.304664  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 13:49:34.308381  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.059395ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.309798  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.142339ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.309962  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 13:49:34.311383  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.223131ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.313368  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.678766ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.313542  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 13:49:34.314475  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (805.925µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.315971  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.178318ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.316169  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 13:49:34.316939  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (655.466µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.318333  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.078222ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.318511  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 13:49:34.321635  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.918619ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.323739  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.710258ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.324215  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 13:49:34.325236  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (846.972µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.328674  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.996515ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.328936  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 13:49:34.330461  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (793.478µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.332299  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.310668ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.332784  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 13:49:34.333678  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (737.412µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.335421  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.327499ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.335661  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 13:49:34.336623  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (771.387µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.338843  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.806686ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.339065  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 13:49:34.340128  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (917.911µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.341793  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.311191ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.342058  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 13:49:34.342917  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (681.498µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.346125  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.657635ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.346315  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 13:49:34.347315  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (846.358µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.349622  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.797445ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.349931  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 13:49:34.351809  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.321325ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.353750  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.26032ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.354017  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 13:49:34.354979  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (773.019µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.356869  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.464932ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.357152  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 13:49:34.358094  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (825.561µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.360026  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.534125ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.360166  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 13:49:34.361093  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (821.11µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.362720  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.327271ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.362850  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 13:49:34.369686  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (6.659366ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.371896  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.72881ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.372956  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 13:49:34.373773  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (591.961µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.375025  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.016346ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.375282  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 13:49:34.376842  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.158672ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.381959  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.381984  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.382019  108557 httplog.go:90] GET /healthz: (5.315047ms) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:34.384444  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.228451ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.384759  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 13:49:34.390206  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.390226  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.390250  108557 httplog.go:90] GET /healthz: (4.136303ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.390255  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (4.826538ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.392857  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.115106ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.393469  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 13:49:34.395403  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.750891ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.396920  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.074837ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.397179  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 13:49:34.398083  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (727.597µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.400095  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.304348ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.400292  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 13:49:34.401561  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (790.073µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.403408  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.497202ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.403625  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 13:49:34.405709  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.122624ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.408503  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.920016ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.408720  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 13:49:34.409856  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (811.636µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.411737  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.580295ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.411976  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 13:49:34.412840  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (645.48µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.414537  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.226752ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.414831  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 13:49:34.415973  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (987.861µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.417468  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.100911ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.417610  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 13:49:34.418539  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (809.009µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.420161  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.164537ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.420319  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 13:49:34.421286  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (811.971µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.425655  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.887689ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.425915  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 13:49:34.426981  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (824.848µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.428530  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.148002ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.429091  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 13:49:34.430056  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (794.466µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.431750  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.306886ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.433841  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 13:49:34.434831  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (702.94µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.437165  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.848886ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.437405  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 13:49:34.438487  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (722.779µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.440504  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.555642ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.440915  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 13:49:34.442015  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (923.91µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.444022  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.662548ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.444229  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 13:49:34.446296  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.87995ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.448226  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.358015ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.448481  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 13:49:34.449641  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (895.2µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.451540  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.507192ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.451793  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 13:49:34.453067  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.09237ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.454683  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.304125ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.454972  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 13:49:34.456010  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (840.186µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.457598  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.235752ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.457767  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 13:49:34.458728  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (796.012µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.460427  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.384659ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.460806  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 13:49:34.461844  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (837.529µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.463417  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.316762ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.463790  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 13:49:34.465337  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.3074ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.467303  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.441569ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.467464  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 13:49:34.468297  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (585.629µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.469978  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.181529ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.470162  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 13:49:34.471159  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (844.192µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.472763  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.20513ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.473077  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 13:49:34.474009  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (768.514µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.475634  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.232242ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.475849  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 13:49:34.476793  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (757.518µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.477157  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.477259  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.477428  108557 httplog.go:90] GET /healthz: (995.71µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:34.478655  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.507793ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.478943  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 13:49:34.480173  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.000311ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.481715  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.187167ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.481896  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 13:49:34.482848  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (792.529µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.484990  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.84344ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.485317  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 13:49:34.486921  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.486972  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.487046  108557 httplog.go:90] GET /healthz: (970.041µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.487167  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.235492ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.488969  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395812ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.489316  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 13:49:34.490308  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (844.53µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.497949  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.298816ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.498124  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 13:49:34.517856  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.053659ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.538350  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.54103ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.538721  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 13:49:34.557777  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.051963ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.577983  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.334761ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.578031  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.578055  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.578088  108557 httplog.go:90] GET /healthz: (1.580935ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:34.578196  108557 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 13:49:34.586845  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.586987  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.587439  108557 httplog.go:90] GET /healthz: (1.368839ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.597668  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.013091ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.618330  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.652375ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.618544  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 13:49:34.637804  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.028246ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.658703  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.970322ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.658972  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 13:49:34.677664  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.677801  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.677848  108557 httplog.go:90] GET /healthz: (1.318708ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:34.678110  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.259464ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.687875  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.687899  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.687980  108557 httplog.go:90] GET /healthz: (1.956887ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.704446  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.723116ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.704913  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 13:49:34.717830  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (949.009µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.738540  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.726172ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.738763  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 13:49:34.757805  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.009978ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.777314  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.777349  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.777380  108557 httplog.go:90] GET /healthz: (882.118µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:34.777996  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.454537ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.778224  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 13:49:34.786866  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.786942  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.787170  108557 httplog.go:90] GET /healthz: (1.037034ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.797457  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (801.665µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.818345  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.655455ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.818700  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 13:49:34.837895  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.197675ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.858507  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.698491ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.858777  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 13:49:34.877482  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.877716  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.878029  108557 httplog.go:90] GET /healthz: (1.492719ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:34.878035  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.353081ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.887187  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.887212  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.887251  108557 httplog.go:90] GET /healthz: (1.133223ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.898217  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.482423ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.898412  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 13:49:34.917725  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.034903ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.938527  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.74865ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.938919  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 13:49:34.957706  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.012274ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.978314  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.548744ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:34.978479  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 13:49:34.982082  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.982106  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.982137  108557 httplog.go:90] GET /healthz: (5.552079ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:34.988029  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:34.988059  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:34.988093  108557 httplog.go:90] GET /healthz: (1.744313ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:34.998247  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.35938ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.018336  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.568675ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.018567  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 13:49:35.038013  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.220683ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.059003  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.211984ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.059303  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 13:49:35.077527  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.077560  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.077644  108557 httplog.go:90] GET /healthz: (1.054392ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:35.078067  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.48526ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.086905  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.086937  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.086963  108557 httplog.go:90] GET /healthz: (843.557µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.098301  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.631351ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.098497  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 13:49:35.118224  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.42066ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.138333  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.589198ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.138714  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 13:49:35.157903  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.137185ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.177536  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.177568  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.177673  108557 httplog.go:90] GET /healthz: (1.090335ms) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:35.179677  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.879366ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.180130  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 13:49:35.189243  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.189278  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.189324  108557 httplog.go:90] GET /healthz: (1.380157ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.197572  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (881.375µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.218779  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.975255ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.219095  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 13:49:35.237945  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.174387ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.258420  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.666412ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.258769  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 13:49:35.277876  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.277967  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.278015  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.169536ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.278044  108557 httplog.go:90] GET /healthz: (1.47058ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:35.287093  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.287120  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.287155  108557 httplog.go:90] GET /healthz: (1.12511ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.298505  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.845251ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.298757  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 13:49:35.317859  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (968.187µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.338389  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.702095ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.339022  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 13:49:35.357888  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.157194ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.377456  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.377641  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.377905  108557 httplog.go:90] GET /healthz: (1.312825ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:35.378327  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.61381ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.378714  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 13:49:35.388836  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.388941  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.388980  108557 httplog.go:90] GET /healthz: (1.258359ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.397700  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (922.858µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.418210  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.520424ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.418432  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 13:49:35.438309  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.616982ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.458739  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.922538ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.459044  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 13:49:35.477768  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.477938  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.477898  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.117881ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.478266  108557 httplog.go:90] GET /healthz: (1.55501ms) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:35.487057  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.487087  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.487126  108557 httplog.go:90] GET /healthz: (997.451µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.498299  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.306373ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.498523  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 13:49:35.518186  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.290459ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.538951  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.144944ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.539272  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 13:49:35.558455  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.644462ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.577690  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.577722  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.577761  108557 httplog.go:90] GET /healthz: (1.08124ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:35.578455  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.613242ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.578744  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 13:49:35.590290  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.590320  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.590355  108557 httplog.go:90] GET /healthz: (804.125µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.597625  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (873.472µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.618427  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.571953ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.618713  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 13:49:35.637884  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.043136ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.658998  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.142034ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.659255  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 13:49:35.677488  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.677517  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.677550  108557 httplog.go:90] GET /healthz: (987.328µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:35.677865  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.279429ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.688630  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.688661  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.688702  108557 httplog.go:90] GET /healthz: (2.533307ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.701230  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.394071ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.701717  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 13:49:35.718260  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.352974ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.738753  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.976159ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.739093  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 13:49:35.757711  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.013887ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.777552  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.777647  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.777691  108557 httplog.go:90] GET /healthz: (1.025815ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:35.778724  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.709741ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.779047  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 13:49:35.786724  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.786752  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.786888  108557 httplog.go:90] GET /healthz: (798.336µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.797545  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (887.666µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.818834  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.087751ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.819101  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 13:49:35.838382  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.527245ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.858460  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.598498ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.858760  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 13:49:35.878367  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.878401  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.878442  108557 httplog.go:90] GET /healthz: (1.792805ms) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:35.878966  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.311856ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.887152  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.887302  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.887523  108557 httplog.go:90] GET /healthz: (1.19194ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.898220  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.468716ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.898567  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 13:49:35.917766  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.045761ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.938811  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.874203ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.939053  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 13:49:35.957793  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.046847ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:35.977667  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.977933  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.978780  108557 httplog.go:90] GET /healthz: (2.096093ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:35.978434  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.65066ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.979269  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 13:49:35.987361  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:35.987394  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:35.987437  108557 httplog.go:90] GET /healthz: (1.254369ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:35.997505  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (817.942µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.018336  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.555458ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.018845  108557 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 13:49:36.038333  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.495666ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.039982  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.238671ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.058288  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.537827ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.058512  108557 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 13:49:36.077389  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.077423  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.077463  108557 httplog.go:90] GET /healthz: (853.512µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:36.077806  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.135343ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.079188  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (994.011µs) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.086739  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.086842  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.086948  108557 httplog.go:90] GET /healthz: (898.922µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.098245  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.444009ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.098953  108557 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 13:49:36.118036  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.230127ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.119749  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.292753ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.138673  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.90664ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.138890  108557 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 13:49:36.158066  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.235766ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.159667  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.241328ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.177506  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.177543  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.177638  108557 httplog.go:90] GET /healthz: (1.031083ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:36.178811  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.138766ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.179013  108557 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 13:49:36.186785  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.186864  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.186905  108557 httplog.go:90] GET /healthz: (824.75µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.197776  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.029823ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.199172  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (919.168µs) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.218480  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.682732ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.218745  108557 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 13:49:36.237914  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.150569ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.239396  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.090267ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.258388  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.597891ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.258724  108557 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 13:49:36.277579  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.277670  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.277698  108557 httplog.go:90] GET /healthz: (1.183042ms) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:36.277871  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.18114ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.279335  108557 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.15962ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.286896  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.286924  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.286961  108557 httplog.go:90] GET /healthz: (894.639µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.298179  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.456931ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.298411  108557 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 13:49:36.317860  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.061548ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.319431  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.179551ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.338546  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.70078ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.338983  108557 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 13:49:36.357985  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.103709ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.359716  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.248621ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.377849  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.378116  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.378400  108557 httplog.go:90] GET /healthz: (1.6799ms) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:36.378696  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.959494ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.378930  108557 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 13:49:36.386727  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.386758  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.386799  108557 httplog.go:90] GET /healthz: (756.831µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.397808  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.124695ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.399342  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.119244ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.418361  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.611352ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.418721  108557 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 13:49:36.437827  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.055784ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.439171  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (933.331µs) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.458480  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.67713ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.458737  108557 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 13:49:36.477698  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.477737  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.477783  108557 httplog.go:90] GET /healthz: (998.642µs) 0 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:36.478418  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.470389ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.480174  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.230372ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.486860  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.486890  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.486923  108557 httplog.go:90] GET /healthz: (779.788µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.498648  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.951339ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.498998  108557 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 13:49:36.518350  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.481186ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.520050  108557 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.115459ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.538477  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.462896ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.538675  108557 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 13:49:36.557776  108557 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.01917ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.559429  108557 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.180206ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0814 13:49:36.577273  108557 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:36.577300  108557 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:36.577342  108557 httplog.go:90] GET /healthz: (825.817µs) 0 [Go-http-client/1.1 127.0.0.1:34196]
I0814 13:49:36.578527  108557 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.833885ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.578832  108557 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 13:49:36.586760  108557 httplog.go:90] GET /healthz: (675.198µs) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.588016  108557 httplog.go:90] GET /api/v1/namespaces/default: (982.468µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.589887  108557 httplog.go:90] POST /api/v1/namespaces: (1.537539ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.591241  108557 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (947.212µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.594803  108557 httplog.go:90] POST /api/v1/namespaces/default/services: (3.157307ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.596122  108557 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (876.596µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0814 13:49:36.596965  108557 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (496.926µs) 422 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
E0814 13:49:36.597294  108557 controller.go:218] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I0814 13:49:36.677372  108557 httplog.go:90] GET /healthz: (822.255µs) 200 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:36.679061  108557 httplog.go:90] GET /api/v1/namespaces/default/pods: (1.301588ms) 200 [Go-http-client/1.1 127.0.0.1:34194]
I0814 13:49:36.679288  108557 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 13:49:36.680543  108557 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (877.122µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
--- FAIL: TestEmptyList (3.54s)
    synthetic_master_test.go:139: body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"35536"},"items":null}
    synthetic_master_test.go:140: nil items field from empty list (all lists should return non-nil empty items lists)

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-134058.xml
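
The assertion above flags that the apiserver serialized an empty PodList with "items":null instead of "items":[]; the test requires every list response to carry a non-nil (possibly empty) items array. A minimal, self-contained Go sketch of that kind of check, using the response body shown in the failure (illustrative only, not the actual synthetic_master_test.go code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Response body captured in the failure above: items is JSON null.
    	body := []byte(`{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"35536"},"items":null}`)

    	// Decode only the top-level fields so the raw items value can be inspected.
    	var list map[string]json.RawMessage
    	if err := json.Unmarshal(body, &list); err != nil {
    		panic(err)
    	}

    	// An empty list must serialize as "items":[], never "items":null.
    	if string(list["items"]) == "null" {
    		fmt.Println("FAIL: nil items field from empty list (all lists should return non-nil empty items lists)")
    	} else {
    		fmt.Println("ok: items is non-nil")
    	}
    }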



k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
=== RUN   TestPreemptWithPermitPlugin
I0814 13:48:06.164885  110052 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 13:48:06.164908  110052 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 13:48:06.164920  110052 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 13:48:06.164930  110052 master.go:234] Using reconciler: 
I0814 13:48:06.167311  110052 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.167424  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.167435  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.167475  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.167546  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.167852  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.167969  110052 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 13:48:06.168182  110052 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.168262  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.168377  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.168387  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.168421  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.168476  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.168603  110052 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 13:48:06.168792  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.168851  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.168958  110052 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 13:48:06.168988  110052 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.169043  110052 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 13:48:06.169056  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.169066  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.169094  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.169145  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.172161  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.173147  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.173158  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.173239  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.173573  110052 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 13:48:06.173693  110052 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 13:48:06.173680  110052 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.173863  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.173872  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.173901  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.173944  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.174350  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.174466  110052 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 13:48:06.174505  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.174550  110052 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 13:48:06.174673  110052 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.174737  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.174747  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.174778  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.174819  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.175143  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.175226  110052 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 13:48:06.175256  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.175292  110052 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 13:48:06.175366  110052 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.175422  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.175431  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.175458  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.175499  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.175802  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.175898  110052 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 13:48:06.175949  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.176023  110052 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 13:48:06.176020  110052 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.176096  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.176106  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.176134  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.176228  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.176557  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.176680  110052 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 13:48:06.176822  110052 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.176892  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.176902  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.176933  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.176972  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.177001  110052 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 13:48:06.177259  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.178401  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.179667  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.179707  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.179726  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.180292  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.180344  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.180429  110052 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 13:48:06.180491  110052 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 13:48:06.180574  110052 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.180726  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.180736  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.180767  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.180878  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.181137  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.181169  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.181226  110052 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 13:48:06.181350  110052 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.181378  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.181403  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.181412  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.181437  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.181489  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.181513  110052 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 13:48:06.181793  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.181872  110052 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 13:48:06.181909  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.181982  110052 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.182048  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.182057  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.182083  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.182122  110052 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 13:48:06.182228  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.182452  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.182509  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.182534  110052 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 13:48:06.182542  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.182597  110052 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 13:48:06.182691  110052 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.182743  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.182752  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.182795  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.182844  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.183161  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.183255  110052 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 13:48:06.183370  110052 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.183419  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.183428  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.183453  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.183487  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.183526  110052 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 13:48:06.183733  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.183856  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.183956  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.184425  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.184510  110052 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 13:48:06.184691  110052 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.184762  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.184773  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.184802  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.184852  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.184881  110052 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 13:48:06.185177  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.185517  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.185540  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.185841  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.185890  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.185933  110052 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 13:48:06.185960  110052 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.186006  110052 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 13:48:06.186052  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.186062  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.186089  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.186188  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.186384  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.186454  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.186463  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.186486  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.186521  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.186564  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.186769  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.186842  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.186900  110052 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.186969  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.186978  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.187005  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.187051  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.189091  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.189424  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.189522  110052 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 13:48:06.190053  110052 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.190215  110052 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.190244  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.190665  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.190890  110052 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 13:48:06.190906  110052 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.191446  110052 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.192012  110052 watch_cache.go:405] Replace watchCache (rev: 27251) 
I0814 13:48:06.192147  110052 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.193817  110052 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.194358  110052 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.194719  110052 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.195057  110052 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.195671  110052 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.196557  110052 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.196959  110052 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.197864  110052 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.198093  110052 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.198827  110052 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.199034  110052 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.199650  110052 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.199854  110052 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.199981  110052 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.200081  110052 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.200220  110052 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.200330  110052 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.200471  110052 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.201273  110052 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.201641  110052 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.202479  110052 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.203512  110052 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.203838  110052 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.204081  110052 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.204832  110052 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.205068  110052 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.205766  110052 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.206521  110052 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.207099  110052 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.207997  110052 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.208269  110052 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.208372  110052 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 13:48:06.208394  110052 master.go:434] Enabling API group "authentication.k8s.io".
I0814 13:48:06.208409  110052 master.go:434] Enabling API group "authorization.k8s.io".
I0814 13:48:06.208557  110052 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.208694  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.208832  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.208956  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.209082  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.209499  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.209642  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.209990  110052 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 13:48:06.210143  110052 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 13:48:06.210709  110052 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.210901  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.211052  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.211174  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.211327  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.211780  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.212426  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.212552  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.212815  110052 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 13:48:06.212934  110052 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 13:48:06.213409  110052 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.213549  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.213623  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.213686  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.213763  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.214058  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.214151  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.215450  110052 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 13:48:06.215476  110052 master.go:434] Enabling API group "autoscaling".
I0814 13:48:06.215722  110052 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.215789  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.215798  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.215830  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.215812  110052 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 13:48:06.215974  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.217704  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.218077  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.218143  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.218239  110052 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 13:48:06.218307  110052 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 13:48:06.218382  110052 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.218448  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.218465  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.218500  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.218559  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.218855  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.217768  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.218886  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.219014  110052 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 13:48:06.219038  110052 master.go:434] Enabling API group "batch".
I0814 13:48:06.219101  110052 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 13:48:06.219183  110052 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.219246  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.219255  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.219287  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.219333  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.220432  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.220483  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.220535  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.220765  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.220965  110052 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 13:48:06.220987  110052 master.go:434] Enabling API group "certificates.k8s.io".
I0814 13:48:06.221111  110052 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.221173  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.221184  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.221216  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.221260  110052 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 13:48:06.221272  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.221491  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.221577  110052 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 13:48:06.221728  110052 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.221767  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.221787  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.221797  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.221817  110052 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 13:48:06.221827  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.221870  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.222072  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.222144  110052 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 13:48:06.222155  110052 master.go:434] Enabling API group "coordination.k8s.io".
I0814 13:48:06.222281  110052 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.222336  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.222345  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.222373  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.222413  110052 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 13:48:06.222344  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.222503  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.222842  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.222927  110052 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 13:48:06.222959  110052 master.go:434] Enabling API group "extensions".
I0814 13:48:06.223028  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.223083  110052 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.223103  110052 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 13:48:06.223154  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.223164  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.223193  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.223335  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.223680  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.223760  110052 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 13:48:06.223775  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.223832  110052 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 13:48:06.223876  110052 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.223935  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.223945  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.223988  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.224050  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.224253  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.224336  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.224356  110052 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 13:48:06.224370  110052 master.go:434] Enabling API group "networking.k8s.io".
I0814 13:48:06.224399  110052 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.224459  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.224468  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.224497  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.224541  110052 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 13:48:06.224755  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.225030  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.225408  110052 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 13:48:06.225426  110052 master.go:434] Enabling API group "node.k8s.io".
I0814 13:48:06.225725  110052 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.225789  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.225812  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.225822  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.225876  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.225931  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.225970  110052 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 13:48:06.226015  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.226398  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.226845  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.227020  110052 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 13:48:06.227157  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.227159  110052 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.227196  110052 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 13:48:06.227271  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.227283  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.227312  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.227369  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.227614  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.227642  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.227757  110052 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 13:48:06.227782  110052 master.go:434] Enabling API group "policy".
I0814 13:48:06.227831  110052 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.227984  110052 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 13:48:06.228027  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.228075  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.228114  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.228178  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.228611  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.228930  110052 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 13:48:06.229070  110052 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.229239  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.229249  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.229280  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.229331  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.229359  110052 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 13:48:06.229506  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.232112  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.232131  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.232249  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.232292  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.232316  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.232361  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.232725  110052 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 13:48:06.232782  110052 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.232854  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.232866  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.232901  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.232853  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.232901  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.232998  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.233006  110052 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 13:48:06.233258  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.233296  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.233340  110052 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 13:48:06.233372  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.233423  110052 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 13:48:06.233460  110052 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.233521  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.233530  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.233567  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.233632  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.233994  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.234332  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.234562  110052 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 13:48:06.234650  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.234804  110052 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 13:48:06.234624  110052 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.236449  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.236460  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.236492  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.236552  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.238540  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.238647  110052 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 13:48:06.238764  110052 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.238817  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.238826  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.238843  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.238910  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.238922  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.238941  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.238991  110052 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 13:48:06.239205  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.239628  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.239732  110052 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 13:48:06.239732  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.239758  110052 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.239807  110052 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 13:48:06.239821  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.239830  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.239857  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.239963  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.240189  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.240264  110052 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 13:48:06.240388  110052 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.240443  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.240453  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.240484  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.240524  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.240594  110052 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 13:48:06.240819  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.240918  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.241057  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.241130  110052 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 13:48:06.241156  110052 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0814 13:48:06.241808  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.241975  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.242154  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.242212  110052 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 13:48:06.243040  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.243274  110052 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.243349  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.243361  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.243394  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.243453  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.243878  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.244046  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.244054  110052 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 13:48:06.244129  110052 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 13:48:06.244529  110052 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.244765  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.244868  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.244952  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.245076  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.245853  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.246246  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.246387  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.246450  110052 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 13:48:06.246610  110052 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 13:48:06.246792  110052 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 13:48:06.246472  110052 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 13:48:06.247152  110052 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.247284  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.247345  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.247430  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.247537  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.247707  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.248127  110052 watch_cache.go:405] Replace watchCache (rev: 27252) 
I0814 13:48:06.248631  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.248685  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.248763  110052 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 13:48:06.248911  110052 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 13:48:06.248913  110052 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.248976  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.248984  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.249010  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.249130  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.249394  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.249648  110052 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 13:48:06.249737  110052 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.249889  110052 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 13:48:06.249896  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.250514  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.250775  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.250927  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.251032  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.251074  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.251389  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.251865  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.251919  110052 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 13:48:06.251981  110052 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 13:48:06.252145  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.252682  110052 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.252847  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.252891  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.252950  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.253041  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.253434  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.253537  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.253875  110052 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 13:48:06.254024  110052 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.254101  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.254112  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.254143  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.254192  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.254609  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.254676  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.254849  110052 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 13:48:06.254927  110052 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 13:48:06.255037  110052 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.255096  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.255106  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.255136  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.255176  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.255639  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.255707  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.255756  110052 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 13:48:06.255773  110052 master.go:434] Enabling API group "storage.k8s.io".
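The large integers in each storagebackend.Config dump are Go time.Duration values printed in nanoseconds: CompactionInterval:300000000000 is 5m0s and CountMetricPollPeriod:60000000000 is 1m0s. A two-line check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Durations in the Config dumps above are nanosecond counts.
	fmt.Println(time.Duration(300000000000)) // 5m0s (CompactionInterval)
	fmt.Println(time.Duration(60000000000))  // 1m0s (CountMetricPollPeriod)
}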
I0814 13:48:06.255898  110052 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.255958  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.256002  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.256013  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.256046  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.256084  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.256115  110052 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 13:48:06.256310  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.256562  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.256721  110052 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 13:48:06.256863  110052 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.256929  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.256945  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.256976  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.257020  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.257063  110052 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 13:48:06.257257  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.254095  110052 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 13:48:06.260939  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.261401  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.261785  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.261926  110052 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 13:48:06.262067  110052 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.262178  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.262188  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.262217  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.262261  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.262290  110052 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 13:48:06.262479  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.262831  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.262882  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.262959  110052 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 13:48:06.263058  110052 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 13:48:06.263093  110052 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.263648  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.263665  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.263732  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.263792  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.264173  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.264220  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.264285  110052 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 13:48:06.264325  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.264411  110052 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 13:48:06.264402  110052 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.264578  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.264670  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.264700  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.264738  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.264984  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.265096  110052 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 13:48:06.265112  110052 master.go:434] Enabling API group "apps".
I0814 13:48:06.265115  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.265139  110052 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.265172  110052 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 13:48:06.265193  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.265203  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.265368  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.265429  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.265692  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.265696  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.265781  110052 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 13:48:06.265783  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.265861  110052 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.265924  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.265933  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.265965  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.266000  110052 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 13:48:06.266163  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.266390  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.266491  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.266492  110052 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 13:48:06.266510  110052 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 13:48:06.266537  110052 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.266679  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.266691  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.266718  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.266763  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.266981  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.267057  110052 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 13:48:06.267078  110052 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.267125  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.267134  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.267158  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.267201  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.267225  110052 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 13:48:06.267405  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.267754  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.267816  110052 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 13:48:06.267827  110052 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 13:48:06.267852  110052 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.267996  110052 client.go:354] parsed scheme: ""
I0814 13:48:06.268006  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:06.268030  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:06.268108  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.268187  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.268210  110052 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 13:48:06.268333  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.268374  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.268485  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.268762  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:06.268826  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:06.268857  110052 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 13:48:06.268872  110052 master.go:434] Enabling API group "events.k8s.io".
I0814 13:48:06.268910  110052 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 13:48:06.269086  110052 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.269258  110052 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.269514  110052 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.269631  110052 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.269732  110052 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.269820  110052 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.269989  110052 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.270085  110052 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.270179  110052 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.270272  110052 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
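Note that none of the tokenreviews.* or *accessreviews.* configs above is followed by a "Monitoring ... count" or reflector line: review APIs are evaluated per request and never persisted, so only the codec half of the storage config is exercised. A hypothetical contrast between a persisted and a virtual resource:

// Hypothetical contrast; these types are illustrative, not apiserver types.
package main

import "fmt"

type persisted struct{ name string }
type virtual struct{ name string }

// Persisted resources get a cacher and reflector (the "Monitoring" lines).
func (p persisted) bringUp() { fmt.Printf("Monitoring %s count\n", p.name) }

// Review resources only handle create; nothing is listed or watched.
func (v virtual) create() { fmt.Printf("%s: evaluated per request, never stored\n", v.name) }

func main() {
	persisted{"events"}.bringUp()
	virtual{"subjectaccessreviews.authorization.k8s.io"}.create()
}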
I0814 13:48:06.271103  110052 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.271400  110052 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.272168  110052 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.272441  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.272537  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.272617  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.272667  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.272917  110052 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.273163  110052 watch_cache.go:405] Replace watchCache (rev: 27253) 
I0814 13:48:06.273769  110052 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.274007  110052 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.274986  110052 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.275228  110052 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.275900  110052 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.276144  110052 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:48:06.276192  110052 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
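The genericapiserver.go:390 warnings here and below mark group-versions whose resource storage map came back empty (alpha APIs are disabled by default in this test run), so the server skips installing them. A hypothetical rendering of that guard:

// Hypothetical guard mirroring the "Skipping API ... because it has no
// resources" warnings; groupVersion is an illustrative type only.
package main

import "fmt"

type groupVersion struct {
	name      string
	resources []string
}

func main() {
	gvs := []groupVersion{
		{"batch/v1", []string{"jobs"}},
		{"batch/v1beta1", []string{"cronjobs"}},
		{"batch/v2alpha1", nil}, // alpha: disabled, so no storage installed
	}
	for _, gv := range gvs {
		if len(gv.resources) == 0 {
			fmt.Printf("Skipping API %s because it has no resources.\n", gv.name)
			continue
		}
		fmt.Printf("Installing %s with %d resource(s)\n", gv.name, len(gv.resources))
	}
}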
I0814 13:48:06.276838  110052 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.276950  110052 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.277149  110052 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.277947  110052 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.278700  110052 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.279733  110052 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.280031  110052 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.280896  110052 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.281734  110052 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.282301  110052 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.283208  110052 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:48:06.283295  110052 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 13:48:06.284243  110052 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.284608  110052 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.285200  110052 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.286223  110052 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.287104  110052 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.288010  110052 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.288913  110052 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.289751  110052 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.290846  110052 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.291893  110052 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.292775  110052 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:48:06.292986  110052 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 13:48:06.293797  110052 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.294819  110052 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:48:06.295015  110052 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 13:48:06.295940  110052 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.296866  110052 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.297294  110052 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.298103  110052 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.298758  110052 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.299450  110052 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.300335  110052 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:48:06.300523  110052 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 13:48:06.301576  110052 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.302526  110052 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.302974  110052 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.304111  110052 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.304617  110052 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.305098  110052 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.306022  110052 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.306436  110052 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.306882  110052 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.308116  110052 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.308526  110052 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.308963  110052 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:48:06.309214  110052 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 13:48:06.309318  110052 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 13:48:06.310208  110052 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.311132  110052 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.312182  110052 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.312978  110052 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.313992  110052 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9886d102-3573-4411-9500-b813dde0658d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:48:06.317203  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.317238  110052 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 13:48:06.317249  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.317260  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.317269  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.317277  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.317306  110052 httplog.go:90] GET /healthz: (258.774µs) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:06.319891  110052 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.029487ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:06.322773  110052 httplog.go:90] GET /api/v1/services: (1.387937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:06.326316  110052 httplog.go:90] GET /api/v1/services: (1.021484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:06.328558  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.328763  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.328859  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.328941  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.329021  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.329867  110052 httplog.go:90] GET /healthz: (1.458364ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:06.331193  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.852217ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40960]
I0814 13:48:06.332821  110052 httplog.go:90] GET /api/v1/services: (2.950267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:06.333182  110052 httplog.go:90] GET /api/v1/services: (3.3162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40962]
I0814 13:48:06.333675  110052 httplog.go:90] POST /api/v1/namespaces: (1.760667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40960]
I0814 13:48:06.335699  110052 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.395796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:06.337773  110052 httplog.go:90] POST /api/v1/namespaces: (1.792997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:06.339468  110052 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.141954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:06.341189  110052 httplog.go:90] POST /api/v1/namespaces: (1.202978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:06.418381  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.418519  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.418602  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.418653  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.418704  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.418852  110052 httplog.go:90] GET /healthz: (621.002µs) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:06.430896  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.431077  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.431146  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.431185  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.431219  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.431356  110052 httplog.go:90] GET /healthz: (605.501µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:06.518518  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.518563  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.518575  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.518650  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.518661  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.518696  110052 httplog.go:90] GET /healthz: (405.355µs) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:06.530818  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.530888  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.530902  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.530912  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.530921  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.530952  110052 httplog.go:90] GET /healthz: (306.115µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
E0814 13:48:06.558478  110052 factory.go:599] Error getting pod permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/test-pod for retry: Get http://127.0.0.1:35285/api/v1/namespaces/permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/pods/test-pod: dial tcp 127.0.0.1:35285: connect: connection refused; retrying...
I0814 13:48:06.618402  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.618440  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.618453  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.618462  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.618469  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.618508  110052 httplog.go:90] GET /healthz: (248.176µs) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:06.630776  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.630817  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.630831  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.630841  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.630849  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.630882  110052 httplog.go:90] GET /healthz: (250.908µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:06.718327  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.718418  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.718434  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.718445  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.718453  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.718494  110052 httplog.go:90] GET /healthz: (298.192µs) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:06.730849  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.730883  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.730896  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.730906  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.730914  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.730953  110052 httplog.go:90] GET /healthz: (254.213µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:06.818350  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.818389  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.818403  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.818412  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.818426  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.818456  110052 httplog.go:90] GET /healthz: (244.957µs) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:06.830993  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.831040  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.831054  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.831064  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.831072  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.831110  110052 httplog.go:90] GET /healthz: (265.159µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:06.918382  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.918416  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.918428  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.918439  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.918447  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.918474  110052 httplog.go:90] GET /healthz: (235.185µs) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:06.930882  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:06.930923  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:06.930951  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:06.930961  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:06.930970  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:06.931081  110052 httplog.go:90] GET /healthz: (299.137µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.018538  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:07.018605  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.018619  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:07.018628  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:07.018636  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:07.018792  110052 httplog.go:90] GET /healthz: (445.68µs) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:07.030783  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:07.030824  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.030844  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:07.030854  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:07.030862  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:07.030893  110052 httplog.go:90] GET /healthz: (263.089µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.119136  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:07.119169  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.119182  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:07.119192  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:07.119201  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:07.119238  110052 httplog.go:90] GET /healthz: (255.57µs) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:07.131153  110052 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:48:07.131188  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.131201  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:07.131211  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:07.131219  110052 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:07.131267  110052 httplog.go:90] GET /healthz: (256.004µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.165089  110052 client.go:354] parsed scheme: ""
I0814 13:48:07.165123  110052 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:48:07.165168  110052 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:48:07.165235  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:07.165922  110052 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:48:07.165998  110052 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:48:07.219325  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.219359  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:07.219370  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:07.219394  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:07.219442  110052 httplog.go:90] GET /healthz: (1.152411ms) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:07.231658  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.231683  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:07.231690  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:07.231697  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:07.231726  110052 httplog.go:90] GET /healthz: (1.139974ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.319891  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.532455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40974]
I0814 13:48:07.319913  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.922467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.320113  110052 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.436477ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.321954  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.321998  110052 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:48:07.322016  110052 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:48:07.322024  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:48:07.322070  110052 httplog.go:90] GET /healthz: (2.64133ms) 0 [Go-http-client/1.1 127.0.0.1:40976]
I0814 13:48:07.322073  110052 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.293518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.322187  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.82036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.322260  110052 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.856986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40974]
I0814 13:48:07.322368  110052 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 13:48:07.324189  110052 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.616449ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.324274  110052 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.5075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40974]
I0814 13:48:07.324394  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.889043ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.325413  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (675.284µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40974]
I0814 13:48:07.326161  110052 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.285043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.326541  110052 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 13:48:07.326555  110052 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0814 13:48:07.327308  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.056748ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40974]
I0814 13:48:07.329212  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.050547ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.330509  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (914.807µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.331709  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.331738  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.331768  110052 httplog.go:90] GET /healthz: (1.272983ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.332962  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.080234ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.335307  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.333843ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.337165  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.454981ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.340022  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.438696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.340225  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 13:48:07.341332  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (938.577µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.343257  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.589563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.343672  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 13:48:07.344786  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (917.143µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.347043  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.581055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.347483  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 13:48:07.349008  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.012615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.351448  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.905046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.351957  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 13:48:07.353469  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.096367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.356018  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.95165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.356485  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 13:48:07.357837  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (866.862µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.359605  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.384036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.359978  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 13:48:07.361086  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (930.281µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.362894  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.433259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.363120  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 13:48:07.364082  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (812.096µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.366113  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.626494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.366504  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 13:48:07.367687  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (872.135µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.369735  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.724368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.369978  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 13:48:07.371068  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (766.134µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.373538  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.989384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.373782  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 13:48:07.374823  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (844.533µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.376565  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.358598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.376790  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 13:48:07.378210  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (880.798µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.380452  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.690555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.380787  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 13:48:07.382487  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.208997ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.384203  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.318948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.384451  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 13:48:07.385646  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (820.693µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.387342  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.390445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.387525  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 13:48:07.388684  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (909.329µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.392043  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.93835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.392353  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 13:48:07.393278  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (711.796µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.394941  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.174602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.395225  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 13:48:07.398739  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (3.304449ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.400722  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.594657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.401106  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 13:48:07.401992  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (684.902µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.403979  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.563118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.404217  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 13:48:07.405144  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (676.809µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.406819  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.14928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.407182  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 13:48:07.408485  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.096524ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.410500  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.532166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.410786  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 13:48:07.411727  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (756.117µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.413948  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.882767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.414202  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 13:48:07.415287  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (830.137µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.419038  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.284224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.419471  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 13:48:07.420446  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (725.243µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.422207  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.378263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.422454  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 13:48:07.423254  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.423277  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.423305  110052 httplog.go:90] GET /healthz: (3.929363ms) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:07.424047  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.363494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.425439  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.077564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.425624  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 13:48:07.427010  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.148716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.428812  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.401883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.429041  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 13:48:07.430132  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (806.632µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.431161  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.431385  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.431533  110052 httplog.go:90] GET /healthz: (1.046248ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.433068  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.340644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.433628  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 13:48:07.434703  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (755.731µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.438201  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.178743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.438479  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 13:48:07.440056  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.273888ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.441655  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.271428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.441842  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 13:48:07.442716  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (667.546µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.444316  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.243635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.444495  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 13:48:07.445674  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (716.118µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.448124  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.740388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.448372  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 13:48:07.449877  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.184792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.453005  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.196344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.453359  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 13:48:07.454452  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (919.123µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.456377  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.366963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.456655  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 13:48:07.457985  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.111765ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.460905  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.362933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.461277  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 13:48:07.462336  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (792.544µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.464055  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.418458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.464288  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 13:48:07.465376  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (881.221µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.467333  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.467497  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 13:48:07.468474  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (756.183µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.470533  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.425548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.470813  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 13:48:07.471696  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (731.293µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.473816  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.734125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.473970  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 13:48:07.477002  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (2.900206ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.478998  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.673919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.479159  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 13:48:07.480188  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (812.926µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.482017  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.44343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.482366  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 13:48:07.483341  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (682.092µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.485038  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.389342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.485299  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 13:48:07.486445  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.022552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.487687  110052 cacher.go:763] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I0814 13:48:07.487711  110052 cacher.go:763] cacher (*rbac.ClusterRole): 2 objects queued in incoming channel.
I0814 13:48:07.488368  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.606454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.488578  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 13:48:07.489567  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (820.219µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.491427  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.569951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.491574  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 13:48:07.492686  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (958.559µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.495016  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.792709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.495298  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 13:48:07.496559  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (978.603µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.498930  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.905932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.499276  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 13:48:07.500188  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (741.912µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.501731  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.123732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.501974  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 13:48:07.502990  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (894.115µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.504433  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.187635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.504727  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 13:48:07.505910  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (834.222µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.507705  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.276915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.508162  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 13:48:07.509337  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (725.315µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.511319  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.382717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.511721  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 13:48:07.512648  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (699.93µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.514548  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.434943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.514923  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 13:48:07.516281  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.074469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.519236  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.519359  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.519524  110052 httplog.go:90] GET /healthz: (1.449083ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:07.520735  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.889103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.521056  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 13:48:07.522134  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (884.349µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.533002  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.533032  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.533076  110052 httplog.go:90] GET /healthz: (1.142292ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.540040  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.999945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.540305  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 13:48:07.559139  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.040271ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.579843  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.787503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.580054  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 13:48:07.599190  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.108509ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.619090  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.619120  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.619155  110052 httplog.go:90] GET /healthz: (1.047755ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:07.620491  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.349858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.621211  110052 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 13:48:07.632229  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.632261  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.632309  110052 httplog.go:90] GET /healthz: (1.287279ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.639273  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.229657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.660075  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.973616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.660421  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 13:48:07.679382  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.237195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.700227  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.123742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.700458  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 13:48:07.719194  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.719226  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.719269  110052 httplog.go:90] GET /healthz: (1.025579ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:07.720233  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.090767ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.731532  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.731567  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.731657  110052 httplog.go:90] GET /healthz: (1.038863ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.740380  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.264083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.740848  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 13:48:07.759174  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.143847ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.780693  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.366445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.781969  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 13:48:07.799338  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.307694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.824807  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.824845  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.824891  110052 httplog.go:90] GET /healthz: (3.31085ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:07.831539  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.831574  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.831635  110052 httplog.go:90] GET /healthz: (919.334µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:07.832767  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (11.172003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.833158  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 13:48:07.840302  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (2.256284ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.861217  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.09995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.861497  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 13:48:07.879316  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.191873ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.900205  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.085016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.900457  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 13:48:07.919766  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.919796  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.919831  110052 httplog.go:90] GET /healthz: (1.748424ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:07.920100  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.968325ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.933257  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:07.933283  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:07.933335  110052 httplog.go:90] GET /healthz: (2.417557ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.940320  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.185468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.940547  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 13:48:07.959386  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.229177ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.980561  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.378069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:07.980954  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 13:48:07.999916  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.674147ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.019511  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.019542  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.019579  110052 httplog.go:90] GET /healthz: (1.351919ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:08.022169  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.035453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.022392  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 13:48:08.035601  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.035635  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.035671  110052 httplog.go:90] GET /healthz: (1.554904ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.040557  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.235155ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.064298  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.230676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.064565  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 13:48:08.079276  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.151654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.100093  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.967532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.100406  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 13:48:08.119257  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.119287  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.119320  110052 httplog.go:90] GET /healthz: (1.066863ms) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:08.119369  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.00357ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.131815  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.131847  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.131882  110052 httplog.go:90] GET /healthz: (1.24619ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.140656  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.600735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.140887  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 13:48:08.159392  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.274761ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.179949  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.796305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.180228  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 13:48:08.199950  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.291364ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.219894  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.219925  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.219959  110052 httplog.go:90] GET /healthz: (1.517781ms) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:08.220217  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.995319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.220451  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 13:48:08.231914  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.231944  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.231996  110052 httplog.go:90] GET /healthz: (1.253084ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.239430  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.392398ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.260230  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.117838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.260470  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 13:48:08.279779  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.649776ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.300673  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.45876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.300932  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 13:48:08.319710  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.319743  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.319784  110052 httplog.go:90] GET /healthz: (1.532912ms) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:08.320545  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.422305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.332018  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.332048  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.332081  110052 httplog.go:90] GET /healthz: (1.202189ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.340218  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.134386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.340471  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 13:48:08.359600  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.482753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.380466  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.235054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.380745  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 13:48:08.399475  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.35414ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.419460  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.419503  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.419540  110052 httplog.go:90] GET /healthz: (1.40913ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:08.422730  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.199619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.423048  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 13:48:08.439108  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.439153  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.439191  110052 httplog.go:90] GET /healthz: (1.08987ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.440397  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (950.845µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.460500  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.385505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.460765  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 13:48:08.479416  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.276317ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.500387  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.294332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.500638  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 13:48:08.520035  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.520067  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.520115  110052 httplog.go:90] GET /healthz: (1.91056ms) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:08.520351  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.842976ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.540443  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.540474  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.540508  110052 httplog.go:90] GET /healthz: (1.43491ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.540774  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.406308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.540966  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 13:48:08.559579  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.467363ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.580147  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.06974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.580399  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 13:48:08.599425  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.308212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.619142  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.619168  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.619195  110052 httplog.go:90] GET /healthz: (1.061741ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:08.620294  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.178076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.620479  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 13:48:08.631665  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.631696  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.631728  110052 httplog.go:90] GET /healthz: (1.02445ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.639272  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.158285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.660709  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.363684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.660953  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 13:48:08.680739  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.327098ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.700006  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.849188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.700283  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 13:48:08.719530  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.719563  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.719666  110052 httplog.go:90] GET /healthz: (1.581648ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:08.720429  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.381227ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.732816  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.732851  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.732891  110052 httplog.go:90] GET /healthz: (2.124054ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.740390  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.201783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.740573  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 13:48:08.759402  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.270251ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.784307  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.82397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.785112  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 13:48:08.799787  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.650808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.824657  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.824716  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.824759  110052 httplog.go:90] GET /healthz: (4.356516ms) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:08.825168  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.749813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.825639  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 13:48:08.835512  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.835555  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.835606  110052 httplog.go:90] GET /healthz: (1.726942ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.839215  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.090734ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.860447  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.31813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.861107  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 13:48:08.879619  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.488645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.900177  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.976272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.902121  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 13:48:08.919793  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.676307ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:08.920233  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.920255  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.920285  110052 httplog.go:90] GET /healthz: (2.079343ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:08.931812  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:08.931850  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:08.931894  110052 httplog.go:90] GET /healthz: (1.168658ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.940858  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.663335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.941140  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 13:48:08.960562  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.136526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.985225  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.023461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:08.985866  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 13:48:09.005648  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (7.510291ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.020260  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.020288  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.020322  110052 httplog.go:90] GET /healthz: (1.397964ms) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:09.020828  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.577615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.021125  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 13:48:09.031563  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.031646  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.031702  110052 httplog.go:90] GET /healthz: (1.020973ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.039525  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.452669ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.060240  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.164313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.060465  110052 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 13:48:09.079426  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.296682ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.081327  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.44288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.100072  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.937702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.100288  110052 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 13:48:09.119678  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.119730  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.119764  110052 httplog.go:90] GET /healthz: (1.569793ms) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:09.119994  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.506122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.121799  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.393873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.131517  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.131553  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.131659  110052 httplog.go:90] GET /healthz: (915.028µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.139760  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.690414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.140450  110052 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 13:48:09.159968  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.113013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.161905  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.371062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.180197  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.085739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.181124  110052 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 13:48:09.199682  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.566541ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.202055  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.46812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.219135  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.219172  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.219223  110052 httplog.go:90] GET /healthz: (1.033408ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:09.220452  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.314099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.220930  110052 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 13:48:09.231754  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.231879  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.232018  110052 httplog.go:90] GET /healthz: (1.344244ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.239783  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.592644ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.242137  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.721872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.260107  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.022102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.260344  110052 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 13:48:09.279688  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.524283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.281748  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.479021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.300219  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.070724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.300517  110052 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 13:48:09.319744  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.319777  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.319792  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.63613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.319828  110052 httplog.go:90] GET /healthz: (1.675134ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:09.321916  110052 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.566842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.331861  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.331894  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.331946  110052 httplog.go:90] GET /healthz: (1.227985ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.340393  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.345329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.340773  110052 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 13:48:09.359566  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.443646ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.361840  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.714704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.380271  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.122826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.380504  110052 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 13:48:09.400188  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.566424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.402745  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.832981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.419613  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.419648  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.419691  110052 httplog.go:90] GET /healthz: (1.417024ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:09.421976  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.307872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.422332  110052 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 13:48:09.431978  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.432223  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.432569  110052 httplog.go:90] GET /healthz: (1.795706ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.439371  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.295778ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.441526  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.498643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.460544  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.399005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.461140  110052 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 13:48:09.479520  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.370999ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.481509  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.225852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.500562  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.330548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.500916  110052 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
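Each of the creations above follows the same shape: GET the rolebinding (404), confirm the namespace exists (200), then POST the rolebinding (201). A hedged client-go sketch of that get-or-create step (v0.18+ method signatures, which take a context; the 2019-era clients omit it):

```go
package rbacbootstrap

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// EnsureRoleBinding reproduces the GET-404-then-POST-201 shape above: look the
// object up, and only create it when the GET comes back NotFound.
func EnsureRoleBinding(ctx context.Context, cs kubernetes.Interface, rb *rbacv1.RoleBinding) error {
	_, err := cs.RbacV1().RoleBindings(rb.Namespace).Get(ctx, rb.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present; the poststarthook moves on
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.RbacV1().RoleBindings(rb.Namespace).Create(ctx, rb, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		err = nil // benign race with a concurrent bootstrapper
	}
	return err
}
```

Treating IsAlreadyExists as success keeps the bootstrap idempotent when several apiservers race to create the same defaults.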
I0814 13:48:09.519657  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.541217ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.519753  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.519790  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.519882  110052 httplog.go:90] GET /healthz: (1.724746ms) 0 [Go-http-client/1.1 127.0.0.1:40958]
I0814 13:48:09.521724  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.410382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.533246  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.533397  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.533543  110052 httplog.go:90] GET /healthz: (2.849926ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.540299  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.179036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.540555  110052 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 13:48:09.561151  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (2.838177ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.565223  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.564582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.587903  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (6.945415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.588195  110052 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 13:48:09.599270  110052 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.163689ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.600992  110052 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.297206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.621304  110052 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.785395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.621479  110052 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:48:09.621503  110052 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:48:09.621533  110052 httplog.go:90] GET /healthz: (3.338899ms) 0 [Go-http-client/1.1 127.0.0.1:40964]
I0814 13:48:09.621886  110052 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 13:48:09.632107  110052 httplog.go:90] GET /healthz: (1.455173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.633540  110052 httplog.go:90] GET /api/v1/namespaces/default: (1.1528ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.635538  110052 httplog.go:90] POST /api/v1/namespaces: (1.590645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.636916  110052 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (943.906µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.640890  110052 httplog.go:90] POST /api/v1/namespaces/default/services: (3.52112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.642539  110052 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.014872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.645596  110052 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.059341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
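The bootstrap controller's work above (ensure the default namespace, the kubernetes Service, and its Endpoints exist) is not one-shot: the same three GETs recur every 10 seconds further down this log. A sketch of such a repair loop built on k8s.io/apimachinery's wait helpers; the function name is illustrative, not the real bootstrap-controller code:

```go
package bootstrapexample

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// RunDefaultNamespaceRepairLoop re-checks the default namespace on a fixed
// interval, matching the 10s cadence of the GET /api/v1/namespaces/default
// lines later in this log.
func RunDefaultNamespaceRepairLoop(ctx context.Context, cs kubernetes.Interface) {
	// wait.Until re-runs the function every period until the channel closes.
	wait.Until(func() {
		_, err := cs.CoreV1().Namespaces().Get(ctx, "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "default"}}
			_, _ = cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
		}
	}, 10*time.Second, ctx.Done())
}
```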
I0814 13:48:09.719313  110052 httplog.go:90] GET /healthz: (1.140743ms) 200 [Go-http-client/1.1 127.0.0.1:40958]
W0814 13:48:09.720114  110052 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
[... the same mutation_detector.go:50 warning repeats 10 more times, once per informer, 13:48:09.720135 to 13:48:09.720310 ...]
I0814 13:48:09.720330  110052 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0814 13:48:09.720340  110052 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
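The two maps logged above are the scheduler's hard filters (fit predicates such as GeneralPredicates and PodToleratesNodeTaints) and soft scores (priority functions such as LeastRequestedPriority). Conceptually the algorithm filters out infeasible nodes, then picks the highest-scoring survivor. A toy sketch of that filter-then-score pass with illustrative types, not the real scheduler framework:

```go
package sched

type Node struct {
	Name            string
	FreeCPUMillis   int64
	FreeMemoryBytes int64
}

type Pod struct {
	Name        string
	CPUMillis   int64
	MemoryBytes int64
}

type Predicate func(p Pod, n Node) bool  // hard constraint: node is feasible or not
type Priority func(p Pod, n Node) int64  // soft constraint: higher is better

// Schedule filters nodes through every predicate, scores the survivors with
// every priority function, and returns the best-scoring node name.
func Schedule(p Pod, nodes []Node, preds []Predicate, prios []Priority) (best string, ok bool) {
	var bestScore int64 = -1
	for _, n := range nodes {
		feasible := true
		for _, pred := range preds {
			if !pred(p, n) { // e.g. GeneralPredicates: enough CPU and memory
				feasible = false
				break
			}
		}
		if !feasible {
			continue
		}
		var score int64
		for _, prio := range prios { // e.g. LeastRequestedPriority
			score += prio(p, n)
		}
		if score > bestScore {
			bestScore, best, ok = score, n.Name, true
		}
	}
	return best, ok
}
```

The real scheduler also normalizes and weights scores, but the filter-then-score split is the shape these two maps encode.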
I0814 13:48:09.720824  110052 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.720842  110052 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.720888  110052 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.720908  110052 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.721157  110052 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.721167  110052 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.721228  110052 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.721241  110052 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.721601  110052 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.721615  110052 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.721702  110052 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.721717  110052 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.722013  110052 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.722025  110052 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.722089  110052 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.722104  110052 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.722477  110052 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.722492  110052 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.722542  110052 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.722557  110052 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.725059  110052 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.725084  110052 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 13:48:09.725875  110052 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (909.346µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41340]
I0814 13:48:09.726052  110052 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (483.428µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41336]
I0814 13:48:09.726876  110052 get.go:250] Starting watch for /api/v1/pods, rv=27251 labels= fields= timeout=6m55s
I0814 13:48:09.726958  110052 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (400.132µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41352]
I0814 13:48:09.727027  110052 get.go:250] Starting watch for /api/v1/services, rv=27613 labels= fields= timeout=5m7s
I0814 13:48:09.727471  110052 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (397.733µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41344]
I0814 13:48:09.727866  110052 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=27253 labels= fields= timeout=7m21s
I0814 13:48:09.728092  110052 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (667.751µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41350]
I0814 13:48:09.728208  110052 get.go:250] Starting watch for /api/v1/nodes, rv=27251 labels= fields= timeout=9m12s
I0814 13:48:09.728339  110052 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (417.477µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41348]
I0814 13:48:09.728723  110052 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=27252 labels= fields= timeout=6m20s
I0814 13:48:09.728856  110052 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (416.25µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41346]
I0814 13:48:09.728986  110052 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=27253 labels= fields= timeout=5m54s
I0814 13:48:09.729413  110052 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=27253 labels= fields= timeout=7m44s
I0814 13:48:09.729786  110052 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (6.681278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:48:09.730500  110052 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (6.093542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41338]
I0814 13:48:09.731300  110052 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (7.800379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:48:09.732628  110052 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=27251 labels= fields= timeout=6m50s
I0814 13:48:09.732705  110052 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=27251 labels= fields= timeout=9m25s
I0814 13:48:09.733717  110052 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=27253 labels= fields= timeout=9m29s
I0814 13:48:09.733786  110052 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (7.874684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41342]
I0814 13:48:09.734735  110052 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=27251 labels= fields= timeout=9m16s
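The reflector lines above all follow the same protocol: a LIST with resourceVersion=0 (the GETs with limit=500), then a WATCH resumed from the resourceVersion the LIST returned (the rv= values in the "Starting watch" lines). A hedged sketch of wiring up one such reflector with client-go (v0.18+ style):

```go
package reflectorexample

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// RunPodReflector mirrors what each "Starting reflector ... (1s)" line sets
// up: one LIST of /api/v1/pods, then a WATCH resumed from the resourceVersion
// the LIST returned, with a 1s resync period as in this test.
func RunPodReflector(cs kubernetes.Interface, stop <-chan struct{}) cache.Store {
	lw := cache.NewListWatchFromClient(
		cs.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &corev1.Pod{}, store, 1*time.Second)
	go r.Run(stop) // list once, then watch; reconnects and re-lists on failure
	return store
}
```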
I0814 13:48:09.820795  110052 shared_informer.go:211] caches populated
[... "caches populated" repeats every 100ms as the test polls informer sync, 13:48:09.921 to 13:48:10.724 ...]
I0814 13:48:10.726703  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:10.726705  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:10.727430  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:10.727973  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:10.732095  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:10.732327  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:10.733454  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:10.824635  110052 shared_informer.go:211] caches populated
I0814 13:48:10.924862  110052 shared_informer.go:211] caches populated
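The "caches populated" polling above ends once every informer has completed its initial LIST; the 1s resync period from the reflector startup lines is also what drives the once-per-second "forcing resync" bursts that recur for the rest of this log. A sketch of the standard client-go factory-plus-WaitForCacheSync pattern behind this:

```go
package informerexample

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// WaitForPodAndNodeCaches starts a shared informer factory with the same 1s
// resync period the test uses and blocks until the initial LISTs are cached,
// which is the moment the "caches populated" polling stops.
func WaitForPodAndNodeCaches(cs kubernetes.Interface, stop <-chan struct{}) error {
	// 1s resync: every second each reflector replays its cached objects to
	// its handlers, producing the "forcing resync" lines seen below.
	factory := informers.NewSharedInformerFactory(cs, 1*time.Second)
	podsSynced := factory.Core().V1().Pods().Informer().HasSynced
	nodesSynced := factory.Core().V1().Nodes().Informer().HasSynced

	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, podsSynced, nodesSynced) {
		return fmt.Errorf("timed out waiting for caches to sync")
	}
	return nil
}
```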
I0814 13:48:10.927965  110052 httplog.go:90] POST /api/v1/nodes: (2.231188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:10.930249  110052 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0814 13:48:10.930778  110052 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods: (2.361853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:10.931547  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/waiting-pod
I0814 13:48:10.931564  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/waiting-pod
I0814 13:48:10.931740  110052 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/waiting-pod", node "test-node-0"
I0814 13:48:10.931765  110052 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0814 13:48:10.931842  110052 framework.go:562] waiting for 30s for pod "waiting-pod" at permit
I0814 13:48:10.932862  110052 factory.go:615] Attempting to bind signalling-pod to test-node-0
I0814 13:48:10.933050  110052 factory.go:615] Attempting to bind waiting-pod to test-node-0
I0814 13:48:10.933527  110052 scheduler.go:447] Failed to bind pod: permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod
E0814 13:48:10.933564  110052 scheduler.go:449] scheduler cache ForgetPod failed: pod 203beb60-32de-491d-9482-e028307f1bc2 wasn't assumed so cannot be forgotten
E0814 13:48:10.933593  110052 scheduler.go:605] error binding pod: Post http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod/binding: dial tcp 127.0.0.1:35771: connect: connection refused
E0814 13:48:10.933616  110052 factory.go:566] Error scheduling permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod: Post http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod/binding: dial tcp 127.0.0.1:35771: connect: connection refused; retrying
I0814 13:48:10.933645  110052 factory.go:624] Updating pod condition for permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0814 13:48:10.934326  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35771/apis/events.k8s.io/v1beta1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/events: dial tcp 127.0.0.1:35771: connect: connection refused' (may retry after sleeping)
E0814 13:48:10.934386  110052 scheduler.go:280] Error updating the condition of the pod permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod: Put http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod/status: dial tcp 127.0.0.1:35771: connect: connection refused
E0814 13:48:10.934385  110052 factory.go:599] Error getting pod permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod for retry: Get http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod: dial tcp 127.0.0.1:35771: connect: connection refused; retrying...
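The "ForgetPod failed" error above is scheduler-cache bookkeeping: a pod is assumed onto a node optimistically before the bind RPC, and forgotten (rolled back) when binding fails. Forgetting a pod that was never assumed, as with this signalling-pod apparently left over from an earlier test whose apiserver (127.0.0.1:35771) has already shut down, is exactly this error. A toy sketch of that assume/forget contract, illustrative only and not the real internal cache:

```go
package sched

import (
	"fmt"
	"sync"
)

type Cache struct {
	mu      sync.Mutex
	assumed map[string]string // pod UID -> node name
}

func NewCache() *Cache { return &Cache{assumed: map[string]string{}} }

// AssumePod records the optimistic placement so the next scheduling cycle
// already sees the node's resources as taken, before the bind RPC returns.
func (c *Cache) AssumePod(uid, node string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.assumed[uid] = node
}

// ForgetPod rolls the optimistic placement back after a failed bind. Calling
// it for a pod that was never assumed yields the exact error in the log.
func (c *Cache) ForgetPod(uid string) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.assumed[uid]; !ok {
		return fmt.Errorf("pod %s wasn't assumed so cannot be forgotten", uid)
	}
	delete(c.assumed, uid)
	return nil
}
```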
I0814 13:48:10.937811  110052 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/waiting-pod/binding: (4.415569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:10.938095  110052 scheduler.go:614] pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0814 13:48:10.940925  110052 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/events: (2.582323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
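waiting-pod, by contrast, binds cleanly: the POST to the pods/binding subresource returns 201 and the scheduler logs the placement. The bind step is a single subresource call; a hedged client-go sketch (v0.18+ signatures):

```go
package bindexample

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// BindPod issues the same call as the successful
// POST .../pods/waiting-pod/binding above: it assigns an
// already-created pod to a node via the binding subresource.
func BindPod(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
	binding := &corev1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     corev1.ObjectReference{Kind: "Node", Name: node},
	}
	return cs.CoreV1().Pods(ns).Bind(ctx, binding, metav1.CreateOptions{})
}
```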
E0814 13:48:11.134861  110052 factory.go:599] Error getting pod permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod for retry: Get http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod: dial tcp 127.0.0.1:35771: connect: connection refused; retrying...
E0814 13:48:11.535490  110052 factory.go:599] Error getting pod permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod for retry: Get http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod: dial tcp 127.0.0.1:35771: connect: connection refused; retrying...
I0814 13:48:11.726858  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
[... the remaining six reflectors log the identical line within the same few milliseconds; the 7-line "forcing resync" burst recurs once per second at 13:48:12, 13:48:13, and 13:48:14 ...]
E0814 13:48:12.336102  110052 factory.go:599] Error getting pod permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod for retry: Get http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod: dial tcp 127.0.0.1:35771: connect: connection refused; retrying...
E0814 13:48:13.936728  110052 factory.go:599] Error getting pod permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod for retry: Get http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod: dial tcp 127.0.0.1:35771: connect: connection refused; retrying...
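The signalling-pod retries are worth reading for their timestamps: 10.93, 11.13, 11.53, 12.33, 13.93, then 17.13, 23.53, and 36.33 below. The gap doubles from 0.2s on each attempt, which is exponential backoff, sketched here with k8s.io/apimachinery's wait helpers:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 200 * time.Millisecond, // first retry after ~0.2s, as in the log
		Factor:   2.0,                    // then 0.4s, 0.8s, 1.6s, 3.2s, ...
		Steps:    8,
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (done bool, err error) {
		attempt++
		fmt.Println("attempt", attempt, "at", time.Now().Format("15:04:05.000"))
		// Returning false retries after the next backoff interval; a real
		// caller would return true once the GET stops failing.
		return false, nil
	})
	fmt.Println(err) // wait.ErrWaitTimeout once Steps attempts are exhausted
}
```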
E0814 13:48:15.317612  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35285/apis/events.k8s.io/v1beta1/namespaces/permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/events: dial tcp 127.0.0.1:35285: connect: connection refused' (may retry after sleeping)
[... 7-line "forcing resync" bursts continue once per second, 13:48:15 through 13:48:18 ...]
E0814 13:48:17.137341  110052 factory.go:599] Error getting pod permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod for retry: Get http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod: dial tcp 127.0.0.1:35771: connect: connection refused; retrying...
E0814 13:48:19.359072  110052 factory.go:599] Error getting pod permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/test-pod for retry: Get http://127.0.0.1:35285/api/v1/namespaces/permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/pods/test-pod: dial tcp 127.0.0.1:35285: connect: connection refused; retrying...
I0814 13:48:19.634578  110052 httplog.go:90] GET /api/v1/namespaces/default: (1.83902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:19.636440  110052 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.321012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:19.638006  110052 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.09726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
[... 7-line "forcing resync" bursts continue once per second, 13:48:19 through 13:48:28 ...]
E0814 13:48:22.246106  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35771/apis/events.k8s.io/v1beta1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/events: dial tcp 127.0.0.1:35771: connect: connection refused' (may retry after sleeping)
E0814 13:48:23.538702  110052 factory.go:599] Error getting pod permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod for retry: Get http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod: dial tcp 127.0.0.1:35771: connect: connection refused; retrying...
E0814 13:48:25.582022  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35285/apis/events.k8s.io/v1beta1/namespaces/permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/events: dial tcp 127.0.0.1:35285: connect: connection refused' (may retry after sleeping)
I0814 13:48:29.634752  110052 httplog.go:90] GET /api/v1/namespaces/default: (1.74677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:29.636762  110052 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.487321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:29.638172  110052 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.004922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
[... 7-line "forcing resync" bursts continue once per second, 13:48:29 through 13:48:38 ...]
E0814 13:48:33.504185  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35771/apis/events.k8s.io/v1beta1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/events: dial tcp 127.0.0.1:35771: connect: connection refused' (may retry after sleeping)
E0814 13:48:36.339236  110052 factory.go:599] Error getting pod permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod for retry: Get http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod: dial tcp 127.0.0.1:35771: connect: connection refused; retrying...
E0814 13:48:36.892886  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35285/apis/events.k8s.io/v1beta1/namespaces/permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/events: dial tcp 127.0.0.1:35285: connect: connection refused' (may retry after sleeping)
I0814 13:48:39.634419  110052 httplog.go:90] GET /api/v1/namespaces/default: (1.463796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:39.636684  110052 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.572111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:39.638627  110052 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.368337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
[... 7-line "forcing resync" bursts continue once per second, 13:48:39 and 13:48:40 ...]
I0814 13:48:40.934203  110052 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods: (2.3541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:40.934424  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:40.934447  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:40.934653  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:40.934740  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:40.937714  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.671851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:40.937983  110052 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/events: (2.511831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45268]
I0814 13:48:40.938003  110052 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod/status: (3.016548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:40.940002  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.546899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:40.940254  110052 generic_scheduler.go:1191] Node test-node-0 is a potential node for preemption.
I0814 13:48:40.942776  110052 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod/status: (2.018567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:40.945576  110052 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/waiting-pod: (2.403854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:40.947449  110052 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/events: (1.312506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
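This is the preemption path end to end: preemptor-pod does not fit (0/1 nodes, insufficient CPU and memory), is marked PodScheduled=False with Reason=Unschedulable, test-node-0 is identified as a preemption candidate, and the DELETE evicts waiting-pod to make room. A hedged sketch of the two API calls involved (client-go v0.18+ signatures; names taken from the log, not the scheduler's actual preemption code):

```go
package preemptexample

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// MarkUnschedulableAndPreempt sketches the calls visible above: the PUT to
// .../pods/<preemptor>/status that records PodScheduled=False/Unschedulable,
// and the DELETE that evicts the chosen victim.
func MarkUnschedulableAndPreempt(ctx context.Context, cs kubernetes.Interface, ns, preemptor, victim string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, preemptor, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Status.Conditions = append(pod.Status.Conditions, corev1.PodCondition{
		Type:   corev1.PodScheduled,
		Status: corev1.ConditionFalse,
		Reason: "Unschedulable",
	})
	if _, err := cs.CoreV1().Pods(ns).UpdateStatus(ctx, pod, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// Removing the victim frees the node's CPU/memory for the preemptor.
	return cs.CoreV1().Pods(ns).Delete(ctx, victim, metav1.DeleteOptions{})
}
```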
I0814 13:48:41.036560  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.611128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
[... the test keeps polling GET .../pods/preemptor-pod every 100ms (200 OK), 13:48:41.136 to 13:48:41.637 ...]
[... 7-line "forcing resync" burst at 13:48:41 ...]
I0814 13:48:41.736768  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.796146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
[... GET .../pods/preemptor-pod polling continues every 100ms (200 OK), 13:48:41.836 to 13:48:42.636 ...]
I0814 13:48:42.726523  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:42.726561  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:42.726786  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:42.726870  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
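
The `no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory` message is the scheduler summing the pod's resource requests against each node's allocatable capacity minus what is already requested on it. A simplified sketch of that bookkeeping — not the scheduler's actual code path, just the arithmetic behind the reasons it prints:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// insufficientResources reports which resources keep pod off a node,
// given the node's allocatable capacity and the requests already
// placed on it. A simplification of the scheduler's fit predicate.
func insufficientResources(pod *v1.Pod, allocatable, requested v1.ResourceList) []string {
	var reasons []string
	for _, res := range []v1.ResourceName{v1.ResourceCPU, v1.ResourceMemory} {
		var need int64
		for _, c := range pod.Spec.Containers {
			if q, ok := c.Resources.Requests[res]; ok {
				need += q.MilliValue() // compare everything in milli-units
			}
		}
		alloc := allocatable[res]
		used := requested[res]
		if free := alloc.MilliValue() - used.MilliValue(); need > free {
			reasons = append(reasons, fmt.Sprintf("Insufficient %s", res))
		}
	}
	return reasons
}
```

For the single-node cluster in this test, a preemptor pod whose CPU and memory requests both exceed the node's remaining headroom yields exactly the two reasons shown, so the pod goes back to the queue ("; waiting").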
I0814 13:48:42.729371  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.733791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:42.729423  110052 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/events: (1.837176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:42.729804  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.615014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41488]
I0814 13:48:42.736118  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.27015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:42.736536  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:42.736633  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:42.738975  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:42.739617  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:42.740267  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:42.740270  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:42.740543  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:42.836663  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.704503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:42.936884  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.703357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:43.036614  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.555988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:43.136696  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.75266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:43.236879  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.840057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:43.336573  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.660404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:43.440563  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (5.569032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:43.536545  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.613251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:43.636859  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.866477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:43.736731  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:43.736737  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:43.737893  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.916486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:43.739203  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:43.739315  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:43.739330  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:43.739449  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:43.739480  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
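
Each failed attempt ends with `Updating pod condition ... to (PodScheduled==False, Reason=Unschedulable)`: the scheduler records the outcome on the pod's status so observers (such as the polling loop above) can see it. A hedged sketch of writing that condition with client-go, using current signatures — the scheduler itself goes through its own helper in factory.go, not this code:

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markUnschedulable sets PodScheduled=False with Reason=Unschedulable
// on the pod's status, roughly what the factory.go:624 line reports.
func markUnschedulable(cs kubernetes.Interface, ns, name, msg string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	cond := v1.PodCondition{
		Type:               v1.PodScheduled,
		Status:             v1.ConditionFalse,
		Reason:             v1.PodReasonUnschedulable,
		Message:            msg,
		LastTransitionTime: metav1.Now(),
	}
	// Replace an existing PodScheduled condition or append a new one.
	replaced := false
	for i := range pod.Status.Conditions {
		if pod.Status.Conditions[i].Type == v1.PodScheduled {
			pod.Status.Conditions[i] = cond
			replaced = true
			break
		}
	}
	if !replaced {
		pod.Status.Conditions = append(pod.Status.Conditions, cond)
	}
	_, err = cs.CoreV1().Pods(ns).UpdateStatus(context.TODO(), pod, metav1.UpdateOptions{})
	return err
}
```

The GET/PATCH pair against `/apis/events.k8s.io/v1beta1/.../events` right after is the accompanying scheduling event being recorded (and deduplicated) for the same failure.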
I0814 13:48:43.740847  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:43.740851  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:43.740880  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:43.740889  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:43.741992  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.555398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:43.742269  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.853393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:43.743927  110052 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/events/preemptor-pod.15bace2f3bcec67f: (3.454366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45480]
I0814 13:48:43.836820  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.884648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:43.936834  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.84786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:44.036514  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.578897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:44.136575  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.634024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:44.236851  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.845983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:44.337046  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.033242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
E0814 13:48:44.426103  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35771/apis/events.k8s.io/v1beta1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/events: dial tcp 127.0.0.1:35771: connect: connection refused' (may retry after sleeping)
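
This `event_broadcaster.go:242` error is cross-talk from an earlier fixture: the POST targets port 35771 and a `permit-plugin52d76190-...` namespace, evidently an apiserver from a previous test in the same process that has already been torn down, so the events client gets connection refused and sleeps before retrying. The `factory.go:599 Error getting pod ... retrying` line further down is the same dead-fixture situation on the scheduler's retry path. The "sleep then retry" shape, sketched generically — the attempt count and interval are illustrative, not the broadcaster's actual tuning:

```go
package main

import (
	"log"
	"time"
)

// postWithRetry calls fn, sleeping and retrying on failure: the same
// "may retry after sleeping" behavior the event broadcaster logs.
// Illustrative only; the real events client has its own backoff policy.
func postWithRetry(attempts int, sleep time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		log.Printf("Unable to write event: %v (may retry after sleeping)", err)
		time.Sleep(sleep)
	}
	return err
}
```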
I0814 13:48:44.436614  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.617884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:44.537040  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.01552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:44.636740  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.64827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:44.736975  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.934572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:44.737293  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:44.737332  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:44.739390  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:44.739579  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:44.739716  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:44.739940  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:44.740060  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:44.740984  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:44.741020  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:44.741025  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:44.741035  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:44.742031  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.63851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:44.742991  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.309484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:44.840158  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (5.046352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:44.937133  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.049406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
E0814 13:48:44.959603  110052 factory.go:599] Error getting pod permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/test-pod for retry: Get http://127.0.0.1:35285/api/v1/namespaces/permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/pods/test-pod: dial tcp 127.0.0.1:35285: connect: connection refused; retrying...
I0814 13:48:45.036672  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.728464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:45.136561  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.629544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:45.236902  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.905687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:45.336938  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.979467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:45.436739  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.66961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:45.536799  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.858694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:45.637316  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.08637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:45.736719  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.729325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:45.737478  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:45.737571  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:45.739648  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:45.739768  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:45.739785  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:45.740166  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:45.740210  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:45.741124  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:45.741474  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:45.741533  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:45.741547  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:45.742458  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.524387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:45.743139  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.931076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:45.836727  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.776619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:45.936793  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.786167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:46.036393  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.488886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:46.136722  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.735815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:46.236474  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.555433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:46.336937  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.052901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:46.436702  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.776543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:46.536942  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.842071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:46.638513  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.455128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:46.736638  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.578174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:46.737985  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:46.737986  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:46.739804  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:46.739949  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:46.739968  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:46.740087  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:46.740137  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:46.741264  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:46.741793  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:46.742036  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:46.742051  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:46.742507  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.15091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:46.742512  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.022821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:46.836740  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.815657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:46.939098  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (4.166607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:47.036437  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.510925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:47.136481  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.532029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:47.236447  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.497899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:47.336722  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.716949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:47.436362  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.465185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:47.536281  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.345034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:47.637240  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.182838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
E0814 13:48:47.733935  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35285/apis/events.k8s.io/v1beta1/namespaces/permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/events: dial tcp 127.0.0.1:35285: connect: connection refused' (may retry after sleeping)
I0814 13:48:47.736345  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.501876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:47.738132  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:47.738157  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:47.739985  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:47.740160  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:47.740186  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:47.740394  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:47.740467  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:47.741430  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:47.741941  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:47.742191  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:47.742247  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:47.742436  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.492218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:47.742734  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.797636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:47.836232  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.293642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:47.936758  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.801596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:48.036696  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.716298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:48.136690  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.758759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:48.236759  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.771995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:48.336621  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.639364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:48.436498  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.53503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:48.536386  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.38318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:48.637481  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.537377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:48.737174  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.259514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:48.738330  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:48.738655  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:48.740166  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:48.740297  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:48.740310  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:48.740432  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:48.740471  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:48.741727  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:48.742098  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:48.742389  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:48.742412  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:48.743297  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.88036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:48.744131  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.127099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:48.837020  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.905918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:48.936615  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.716013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.036868  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.90552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.136356  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.375182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.237355  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.47139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.336612  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.611563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.436887  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.823309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.536235  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.368557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.636726  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.806913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:49.636859  110052 httplog.go:90] GET /api/v1/namespaces/default: (3.673816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.638485  110052 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.205968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.639998  110052 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.122559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
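
The trio of GETs just above — `/api/v1/namespaces/default`, `.../services/kubernetes`, `.../endpoints/kubernetes` — stands apart from the test traffic; it appears to be the apiserver's own periodic reconciliation of the built-in `kubernetes` Service and its Endpoints in the `default` namespace. Reading the same three objects with client-go looks like this (a sketch with current signatures, not the reconciler's internals):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpKubernetesService fetches the three objects the periodic check
// above touches: the default namespace, the kubernetes Service, and
// its Endpoints.
func dumpKubernetesService(cs kubernetes.Interface) error {
	ctx := context.TODO()
	if _, err := cs.CoreV1().Namespaces().Get(ctx, "default", metav1.GetOptions{}); err != nil {
		return err
	}
	svc, err := cs.CoreV1().Services("default").Get(ctx, "kubernetes", metav1.GetOptions{})
	if err != nil {
		return err
	}
	ep, err := cs.CoreV1().Endpoints("default").Get(ctx, "kubernetes", metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("service %s -> %d endpoint subset(s)\n", svc.Name, len(ep.Subsets))
	return nil
}
```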
I0814 13:48:49.736458  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.453066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.738493  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:49.738787  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:49.740276  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:49.740391  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:49.740407  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:49.740555  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:49.740625  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:49.741891  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:49.742241  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:49.742523  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:49.742701  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:49.743134  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.277052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:49.743312  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.385362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:49.836316  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.311991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:49.942792  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (7.829808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:50.038010  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.019807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:50.136316  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.325206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:50.237019  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.022712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:50.336436  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.480204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:50.436759  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.780838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:50.536404  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.462386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:50.645375  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (10.401973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:50.736532  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.588309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:50.738706  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:50.738922  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:50.740451  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:50.740640  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:50.740665  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:50.740956  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:50.741066  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:50.742025  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:50.742797  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:50.742827  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:50.742980  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.472007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:50.743253  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.603621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:50.743342  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:50.836964  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.739091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:50.936512  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.583718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:51.036387  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.441015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:51.136738  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.823512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:51.236484  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.54966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:51.336603  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.628239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:51.437257  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.283835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:51.536920  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.87695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:51.638682  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.447914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:51.737331  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.213313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:51.738914  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:51.739138  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:51.740647  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:51.740863  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:51.740894  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:51.741058  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:51.741120  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:51.742171  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:51.742984  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:51.743091  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:51.743163  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.786991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:51.743633  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:51.743753  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.333555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:51.836725  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.649498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:51.962072  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (25.816777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.037274  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.075293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.136803  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.468032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.236394  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.47219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.337225  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.128725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.436851  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.82306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.541063  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.218565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.636683  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.660134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.736868  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.852522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.739135  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:52.739270  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:52.740839  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:52.740982  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:52.740994  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:52.741122  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:52.741153  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:52.742395  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:52.743132  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:52.743230  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:52.744817  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.181098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:52.745368  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.657326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.746251  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:52.836789  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.880547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:52.936516  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.591771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:53.036920  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.91741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:53.137294  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.768276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:53.236299  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.283618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:53.336488  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.560425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:53.436288  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.3428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:53.536400  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.388513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:53.636452  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.544389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:53.736813  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.823733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:53.739282  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:53.739391  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:53.740987  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:53.741128  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:53.741147  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:53.741320  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:53.741363  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:53.742730  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:53.743293  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:53.743317  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:53.743764  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.315969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:53.744048  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.006462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:53.746412  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:53.836698  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.731262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:53.936806  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.845381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:54.036482  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.544978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:54.136562  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.625266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:54.236775  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.785639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:54.336328  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.373268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:54.436425  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.53861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:54.536767  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.462653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:54.636473  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.58617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:54.736696  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.725627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:54.739485  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:54.739511  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:54.741155  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:54.741315  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:54.741334  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:54.741461  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:54.741520  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:54.742889  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:54.743450  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:54.743450  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:54.744970  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.074872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:54.745271  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.412461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:54.746705  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:54.836871  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.841126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:54.937326  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.33596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:55.037030  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.046656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:55.136875  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.97012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:55.236497  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.590956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:55.336547  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.642863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:55.436850  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.771624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:55.536652  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.747648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:55.636755  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.815374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:55.736790  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.905904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:55.739664  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:55.739772  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:55.741310  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:55.741439  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:55.741458  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:55.741566  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:55.741621  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:55.742980  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:55.743667  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.312088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:55.743676  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.776739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:55.743965  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:55.743978  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:55.746846  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:48:55.781603  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35771/apis/events.k8s.io/v1beta1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/events: dial tcp 127.0.0.1:35771: connect: connection refused' (may retry after sleeping)
I0814 13:48:55.836720  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.670439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:55.937190  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.27007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:56.037567  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.560114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:56.136934  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.874947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:56.236230  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.311197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:56.336646  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.711203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:56.436429  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.549156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:56.536846  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.900677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:56.636488  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.411862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:56.737281  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.356872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:56.739967  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:56.744153  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:56.744719  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:56.744775  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:56.744782  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:56.744806  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:56.744886  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:56.744898  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:56.744991  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:56.745045  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:56.747014  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:56.747497  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.102803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:56.747518  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.214499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:56.836729  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.812561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:56.937463  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.504529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:57.036835  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.941081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:57.141808  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (6.815194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:57.237117  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.855165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:57.336450  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.546121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:57.437516  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.550806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:57.536391  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.454124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:57.636988  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.760257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:57.736923  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.935076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:57.740117  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:57.744321  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:57.744820  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:57.744843  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:57.744924  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:57.744926  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:57.745078  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:57.745100  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:57.745232  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:57.745283  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:57.747035  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.456342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:57.747136  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:57.747524  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.801471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
E0814 13:48:57.825526  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35285/apis/events.k8s.io/v1beta1/namespaces/permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/events: dial tcp 127.0.0.1:35285: connect: connection refused' (may retry after sleeping)
I0814 13:48:57.837839  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.897456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:57.936712  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.682351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.036497  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.512712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.136774  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.849087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.237770  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.762019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.336866  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.895075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.436315  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.338425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.536394  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.430799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.636387  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.459862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.736425  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.514939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.740284  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:58.744502  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:58.744965  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:58.744979  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:58.744981  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:58.745127  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:58.745212  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:58.745222  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:58.745382  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:58.745425  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:58.747293  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:58.747500  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.622898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:58.747655  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.28738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.836509  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.534532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:58.936443  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.519057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.036625  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.685962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.136728  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.708197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.236248  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.364542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.336879  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.950877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.436359  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.384431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.536606  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.620789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.634539  110052 httplog.go:90] GET /api/v1/namespaces/default: (1.254183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.636206  110052 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.161425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.636443  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.131416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:59.637670  110052 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.08807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.736951  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.727123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.740439  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:59.744685  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:59.745131  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:59.745161  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:59.745188  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:59.745258  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:59.745355  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:59.745376  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:48:59.745542  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:48:59.745665  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:48:59.747415  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:48:59.747699  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.736623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:48:59.748464  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.473854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.836474  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.496836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:48:59.936462  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.479896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.036434  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.525757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.136426  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.49958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.236423  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.408917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.336500  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.55334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.436491  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.569533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.536771  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.623027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.636738  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.723462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.737077  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.080676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.740621  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:00.744898  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:00.745256  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:00.745304  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:00.745306  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:00.745328  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:00.745415  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:00.745432  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:00.745553  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:00.745632  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:00.747334  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.367008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:00.747472  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.154878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.747765  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:00.836412  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.449181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:00.936419  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.478048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:01.036487  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.553167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:01.136609  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.675915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:01.237000  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.016698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:01.336321  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.409927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:01.436702  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.733389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:01.536536  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.485817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:01.638486  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.212128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:01.736977  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.684899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:01.740836  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:01.745100  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:01.745422  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:01.745424  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:01.745434  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:01.745446  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:01.745569  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:01.745599  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:01.745717  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:01.745751  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:01.747926  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:01.748046  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.651364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:01.748262  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.170837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:01.836250  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.350549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:01.936430  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.425357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
E0814 13:49:01.939837  110052 factory.go:599] Error getting pod permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/signalling-pod for retry: Get http://127.0.0.1:35771/api/v1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/pods/signalling-pod: dial tcp 127.0.0.1:35771: connect: connection refused; retrying...
I0814 13:49:02.036265  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.333889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:02.136618  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.680433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:02.237265  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.880691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:02.336395  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.429232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:02.436563  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.619821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:02.536376  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.361327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:02.636439  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.427825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:02.736561  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.570308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:02.741005  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:02.745270  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:02.745573  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:02.745615  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:02.745699  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:02.745571  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:02.745848  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:02.745861  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:02.746015  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:02.746075  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:02.748143  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.618902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:02.748815  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.344403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:02.748989  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:02.836323  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.392731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:02.936552  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.605639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.036408  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.418359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.136566  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.559825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.236322  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.412328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.336493  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.511444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.436458  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.505069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.536838  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.519091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.636476  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.515455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.736428  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.485133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.741202  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:03.745426  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:03.745663  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:03.745761  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:03.745802  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:03.745920  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:03.745928  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:03.745931  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:03.746088  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:03.746134  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:03.747971  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.240362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.748321  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.659693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:03.749136  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:03.836384  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.448218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:03.936954  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.997054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:04.036415  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.518052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:04.136385  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.468672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:04.236531  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.596468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:04.336362  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.444312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:04.436461  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.526456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:04.540036  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (4.377022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:04.636744  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.750558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:04.736790  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.772072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:04.741367  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.745647  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.745835  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.745888  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.745939  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.746134  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.746308  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:04.746332  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:04.746508  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:04.746561  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:04.748485  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.610792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:04.748488  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.501724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:04.749377  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.836779  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.866041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:04.936753  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.87008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.036571  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.603106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.136402  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.418728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.236064  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.168529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.336386  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.444047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.436253  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.358869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.537004  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.965339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.636395  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.426148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.736747  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.776673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.741526  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.745873  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.746010  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.746042  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.746282  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.746416  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:05.746426  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.746440  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:05.746746  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:05.746856  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:05.748480  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.287082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.748523  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.265164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:05.749638  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.836721  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.809157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:05.936318  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.44261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.036538  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.579748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.136686  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.745569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.236867  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.960748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.336668  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.675401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.342984  110052 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.127803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.344929  110052 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.049029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.346033  110052 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (732.949µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.440824  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (5.860103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.536171  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.201691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
E0814 13:49:06.591179  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35771/apis/events.k8s.io/v1beta1/namespaces/permit-plugin52d76190-5932-48aa-99fd-ab6e89489c1d/events: dial tcp 127.0.0.1:35771: connect: connection refused' (may retry after sleeping)
I0814 13:49:06.636717  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.748381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.736381  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.294748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.741671  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.746056  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.746178  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.746286  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.746433  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.746558  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:06.746594  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:06.746729  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:06.746769  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:06.747515  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.748538  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.377001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:06.748561  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.50979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.749862  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.836457  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.496603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:06.936870  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.943849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:07.037528  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.281479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:07.136452  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.501996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:07.236927  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.979876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:07.336254  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.328866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:07.436537  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.580031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:07.536040  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.17764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:07.636558  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.561979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:07.736357  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.428532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:07.741824  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.746249  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.746256  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.746371  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.746698  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.746832  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:07.746853  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:07.747006  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:07.747060  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:07.747748  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.748803  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.403784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:07.748826  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.54276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:07.750025  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.837349  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.4663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:07.936253  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.383157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.036552  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.600779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.137415  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.358018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.236445  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.473415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.337301  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.349073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.436887  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.673269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.536397  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.459594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.636550  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.582039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.736227  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.311502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.742006  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.746420  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.746422  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.746533  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.746878  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.747010  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:08.747022  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:08.747151  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:08.747193  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:08.747878  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.748794  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.272752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:08.749078  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.661823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.750185  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.836207  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.300224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:08.936421  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.542354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.036645  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.653237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
E0814 13:49:09.065117  110052 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35285/apis/events.k8s.io/v1beta1/namespaces/permit-pluginbde5636c-d8c0-4643-8837-e830fce403d7/events: dial tcp 127.0.0.1:35285: connect: connection refused' (may retry after sleeping)
I0814 13:49:09.136658  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.713351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.236724  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.811864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.336688  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.747064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.436195  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.321187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.536309  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.398549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.634897  110052 httplog.go:90] GET /api/v1/namespaces/default: (1.471264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.637287  110052 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.138064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.637652  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.747108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:09.639035  110052 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (977.275µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.736394  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.453353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.742405  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.747813  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.747818  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.747925  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.747994  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.748508  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.750351  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.836202  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.338084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:09.936873  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.944938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.036569  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.571545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.136514  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.517481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.236834  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.917969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.336476  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.540285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.436704  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.799635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.536680  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.722205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.636732  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.746054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.731142  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:10.731173  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:10.731302  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:10.731340  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:10.733542  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.612038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:10.733966  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.56341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.735803  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.016277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:10.742608  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.747955  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.748090  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.748112  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.748167  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.748235  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:10.748253  110052 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:10.748378  110052 factory.go:550] Unable to schedule preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:10.748423  110052 factory.go:624] Updating pod condition for preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:10.748746  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.750194  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.331421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.750501  110052 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.750790  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (2.095513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:10.836882  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.935284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:10.936484  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.552872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:10.938288  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.406667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:10.940191  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/waiting-pod: (1.245718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:10.945701  110052 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/waiting-pod: (5.11766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:10.949979  110052 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (3.691792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:10.950141  110052 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:10.950195  110052 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/preemptor-pod
I0814 13:49:10.952790  110052 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/events: (2.268476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45270]
I0814 13:49:10.954128  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/waiting-pod: (1.594113ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.956812  110052 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin445b2aee-e6b6-44ee-ae21-4fd0e1964ede/pods/preemptor-pod: (1.232956ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
E0814 13:49:10.957512  110052 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0814 13:49:10.957918  110052 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=27613&timeout=5m7s&timeoutSeconds=307&watch=true: (1m1.231388118s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41336]
I0814 13:49:10.957933  110052 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=27252&timeout=6m20s&timeoutSeconds=380&watch=true: (1m1.229511568s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41350]
I0814 13:49:10.957951  110052 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=27253&timeout=7m21s&timeoutSeconds=441&watch=true: (1m1.230553239s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41352]
I0814 13:49:10.957982  110052 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=27253&timeout=7m44s&timeoutSeconds=464&watch=true: (1m1.228784724s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41346]
I0814 13:49:10.957918  110052 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=27253&timeout=9m29s&timeoutSeconds=569&watch=true: (1m1.224487549s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40964]
I0814 13:49:10.958046  110052 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=27253&timeout=5m54s&timeoutSeconds=354&watch=true: (1m1.229307555s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41348]
I0814 13:49:10.958100  110052 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=27251&timeout=6m55s&timeoutSeconds=415&watch=true: (1m1.231571994s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41340]
I0814 13:49:10.958107  110052 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=27251&timeout=9m12s&timeoutSeconds=552&watch=true: (1m1.230279781s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41344]
I0814 13:49:10.958124  110052 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=27251&timeout=6m50s&timeoutSeconds=410&watch=true: (1m1.225836784s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41338]
I0814 13:49:10.958129  110052 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=27251&timeout=9m16s&timeoutSeconds=556&watch=true: (1m1.223775792s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41342]
I0814 13:49:10.958209  110052 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=27251&timeout=9m25s&timeoutSeconds=565&watch=true: (1m1.225895792s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40958]
I0814 13:49:10.961961  110052 httplog.go:90] DELETE /api/v1/nodes: (3.772557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.962099  110052 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 13:49:10.963321  110052 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.024028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
I0814 13:49:10.965017  110052 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.287199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45318]
--- FAIL: TestPreemptWithPermitPlugin (64.80s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted

from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-134058.xml
