Result: FAILURE
Tests: 1 failed / 578 succeeded
Started: 2018-12-07 00:29
Elapsed: 25m48s
Version: v1.14.0-alpha.0.897+8a167afe7042db
Builder: gke-prow-default-pool-3c8994a8-639j
Refs: master:1cd6ccb3, 71684:3c055aa4, 71734:7ee0e002
pod: 0e55998b-f9b7-11e8-92b6-0a580a6c0310
infra-commit: d3eb51cda
repo: k8s.io/kubernetes
repo-commit: 8a167afe7042dbbcde1c65b2cd0e20a8f5cdf423
repos: k8s.io/kubernetes: master:1cd6ccb34458def1347ae96b2e8aacb5338f8e1d, 71684:3c055aa4b47232bf7d6b5d5a0901dae239e33c59, 71734:7ee0e002707de821f469381af54c5106e6f0c933

Test Failures


k8s.io/kubernetes/test/integration/auth TestNodeAuthorizer (12s)

go test -v k8s.io/kubernetes/test/integration/auth -run TestNodeAuthorizer$
W1207 00:44:19.544639  117427 feature_gate.go:211] Setting GA feature gate CSIPersistentVolume=true. It will be removed in a future release.
I1207 00:44:19.545939  117427 serving.go:311] Generated self-signed cert (/tmp/kubernetes-kube-apiserver153857854/apiserver.crt, /tmp/kubernetes-kube-apiserver153857854/apiserver.key)
I1207 00:44:19.545951  117427 server.go:557] external host was not specified, using 127.0.0.1
W1207 00:44:20.375755  117427 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
[... the warning above is repeated 16 more times between 00:44:20.376070 and 00:44:20.378271 ...]
I1207 00:44:20.378311  117427 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I1207 00:44:20.378326  117427 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I1207 00:44:20.379481  117427 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I1207 00:44:20.379499  117427 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I1207 00:44:20.381342  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.381367  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.381410  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.381493  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.381900  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 00:44:20.410552  117427 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I1207 00:44:20.411586  117427 master.go:228] Using reconciler: lease
I1207 00:44:20.411750  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.411780  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.411822  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.411872  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.412242  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.414358  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.414379  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.414419  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.414469  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.414822  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.415316  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.415338  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.415396  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.415471  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.415886  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.415906  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.415933  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.415922  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.415985  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.416265  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.416511  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.416532  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.416667  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.416710  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.417320  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.417406  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.417423  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.417456  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.417498  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.418081  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.418107  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.418145  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.418228  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.418258  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.418494  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.418848  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.418868  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.418906  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.418952  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.419154  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.419482  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.419504  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.419531  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.419735  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.420159  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.420478  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.420494  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.420524  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.420614  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.421049  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.421193  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.421229  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.421267  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.421327  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.421730  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.421868  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.421883  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.421911  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.421949  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.422394  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.422595  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.422615  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.422648  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.422700  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.423057  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.423346  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.423369  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.423427  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.423467  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.424378  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.424679  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.424700  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.424731  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.424806  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.425076  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.425252  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.425297  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.425327  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.425363  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.425683  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.425705  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.425733  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.425843  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.425868  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.426767  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.426938  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.426962  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.426996  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.427067  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.427365  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.536992  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.537030  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.537087  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.537166  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.537673  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.538167  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.538191  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.538230  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.538343  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.538745  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.539095  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.539113  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.539147  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.539217  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.539787  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.539803  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.539860  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.539963  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.540034  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.540292  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.540651  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.540676  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.540706  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.540769  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.541006  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.541424  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.541440  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.541470  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.541549  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.541931  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.542179  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.542195  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.542226  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.542425  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.542665  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.543031  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.543053  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.543084  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.543133  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.543534  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.543810  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.543835  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.543864  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.543916  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.544417  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.544710  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.544725  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.544751  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.544841  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.545114  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.545452  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.545469  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.545498  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.545535  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.546223  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.546520  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.546537  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.546567  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.546617  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.547185  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.547211  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.547253  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.547386  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.547505  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.548375  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.548419  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.548489  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.548521  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.548695  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.549340  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.549579  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.549598  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.549639  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.549688  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.550247  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.550266  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.550173  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.550338  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.550431  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.551460  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.551842  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.551865  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.551902  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.552009  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.552438  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.552888  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.552912  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.552965  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.553004  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.553516  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.553529  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.553545  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.553574  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.553645  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.554318  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.554518  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.554539  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.554661  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.554697  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.555015  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.555510  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.555536  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.555575  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.555643  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.556113  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.558100  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.558319  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.558412  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.558482  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.559043  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.559216  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.559917  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.559979  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.560031  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.560380  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.560502  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.560524  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.560559  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.560631  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.561036  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.561171  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.561193  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.561226  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.561289  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.561558  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.561665  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.561687  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.561717  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.561776  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.562232  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.562367  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.562390  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.562469  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.562811  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.563569  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:20.563804  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:20.563827  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:20.563856  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:20.563905  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
[... the clientconn/resolver/balancer dial sequence to 127.0.0.1:2379 above repeats ~25 more times, I1207 00:44:20.564381 through I1207 00:44:20.591541 ...]
[restful] 2018/12/07 00:44:21 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:43091/swaggerapi
[restful] 2018/12/07 00:44:21 log.go:33: [restful/swagger] https://127.0.0.1:43091/swaggerui/ is mapped to folder /swagger-ui/
I1207 00:44:21.375766  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:21.375842  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:21.375914  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:21.375979  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:21.378524  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
[restful] 2018/12/07 00:44:23 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:43091/swaggerapi
[restful] 2018/12/07 00:44:23 log.go:33: [restful/swagger] https://127.0.0.1:43091/swaggerui/ is mapped to folder /swagger-ui/
I1207 00:44:23.299079  117427 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I1207 00:44:23.299108  117427 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
W1207 00:44:23.300541  117427 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I1207 00:44:23.300653  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:23.300671  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:23.300715  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:23.300812  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:23.301251  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:23.301945  117427 clientconn.go:551] parsed scheme: ""
I1207 00:44:23.301968  117427 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 00:44:23.302004  117427 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 00:44:23.302047  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 00:44:23.302431  117427 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 00:44:23.304707  117427 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I1207 00:44:28.405080  117427 secure_serving.go:116] Serving securely on 127.0.0.1:43091
I1207 00:44:28.405265  117427 controller.go:84] Starting OpenAPI AggregationController
I1207 00:44:28.405265  117427 apiservice_controller.go:90] Starting APIServiceRegistrationController
I1207 00:44:28.405315  117427 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1207 00:44:28.412674  117427 available_controller.go:314] Starting AvailableConditionController
I1207 00:44:28.412704  117427 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1207 00:44:28.412745  117427 autoregister_controller.go:136] Starting autoregister controller
I1207 00:44:28.412782  117427 cache.go:32] Waiting for caches to sync for autoregister controller
I1207 00:44:28.412814  117427 crd_finalizer.go:242] Starting CRDFinalizer
I1207 00:44:28.412836  117427 crdregistration_controller.go:112] Starting crd-autoregister controller
I1207 00:44:28.412849  117427 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller
I1207 00:44:28.412864  117427 customresource_discovery_controller.go:203] Starting DiscoveryController
I1207 00:44:28.412885  117427 naming_controller.go:284] Starting NamingConditionController
I1207 00:44:28.412903  117427 establishing_controller.go:73] Starting EstablishingController
W1207 00:44:28.442377  117427 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E1207 00:44:28.443422  117427 controller.go:155] Unable to perform initial Kubernetes service initialization: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
W1207 00:44:28.450735  117427 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E1207 00:44:28.451735  117427 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I1207 00:44:28.505470  117427 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1207 00:44:28.512905  117427 cache.go:39] Caches are synced for AvailableConditionController controller
I1207 00:44:28.512958  117427 cache.go:39] Caches are synced for autoregister controller
I1207 00:44:28.513006  117427 controller_utils.go:1034] Caches are synced for crd-autoregister controller
I1207 00:44:29.411402  117427 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I1207 00:44:29.415236  117427 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I1207 00:44:29.415257  117427 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I1207 00:44:29.421825  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1207 00:44:29.424981  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1207 00:44:29.428314  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1207 00:44:29.431468  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I1207 00:44:29.434528  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I1207 00:44:29.438508  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I1207 00:44:29.441295  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1207 00:44:29.444410  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1207 00:44:29.447316  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1207 00:44:29.450211  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1207 00:44:29.453108  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I1207 00:44:29.455786  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1207 00:44:29.458612  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1207 00:44:29.461269  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1207 00:44:29.464018  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1207 00:44:29.466810  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1207 00:44:29.469501  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1207 00:44:29.472471  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1207 00:44:29.475536  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1207 00:44:29.478400  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1207 00:44:29.481234  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1207 00:44:29.484050  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1207 00:44:29.487063  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I1207 00:44:29.490148  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1207 00:44:29.494751  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1207 00:44:29.501610  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1207 00:44:29.504627  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1207 00:44:29.508318  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1207 00:44:29.511724  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1207 00:44:29.515039  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1207 00:44:29.518164  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1207 00:44:29.521131  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1207 00:44:29.524057  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1207 00:44:29.527377  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1207 00:44:29.530558  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1207 00:44:29.533356  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1207 00:44:29.536756  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1207 00:44:29.539707  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1207 00:44:29.542992  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1207 00:44:29.545902  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1207 00:44:29.549019  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1207 00:44:29.552078  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1207 00:44:29.555264  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1207 00:44:29.558203  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1207 00:44:29.561425  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1207 00:44:29.564396  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1207 00:44:29.567169  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1207 00:44:29.570108  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1207 00:44:29.573109  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1207 00:44:29.575997  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1207 00:44:29.607919  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1207 00:44:29.665903  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1207 00:44:29.711294  117427 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1207 00:44:29.736627  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1207 00:44:29.781473  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1207 00:44:29.811744  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1207 00:44:29.848589  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1207 00:44:29.888662  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1207 00:44:29.928446  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1207 00:44:29.968509  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1207 00:44:30.008134  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I1207 00:44:30.048574  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1207 00:44:30.089109  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1207 00:44:30.128608  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1207 00:44:30.168537  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1207 00:44:30.208032  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1207 00:44:30.249302  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1207 00:44:30.288842  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1207 00:44:30.328909  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1207 00:44:30.368561  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1207 00:44:30.408236  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1207 00:44:30.448302  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1207 00:44:30.488745  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1207 00:44:30.528924  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1207 00:44:30.568629  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1207 00:44:30.608722  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1207 00:44:30.648352  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1207 00:44:30.688595  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1207 00:44:30.728843  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1207 00:44:30.768230  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1207 00:44:30.808070  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1207 00:44:30.848526  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1207 00:44:30.888922  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1207 00:44:30.928347  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1207 00:44:30.968680  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1207 00:44:31.008022  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1207 00:44:31.048362  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1207 00:44:31.093082  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1207 00:44:31.128352  117427 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1207 00:44:31.166782  117427 controller.go:608] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1207 00:44:31.168500  117427 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1207 00:44:31.209612  117427 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1207 00:44:31.248240  117427 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1207 00:44:31.288344  117427 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1207 00:44:31.328558  117427 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1207 00:44:31.368304  117427 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1207 00:44:31.409583  117427 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1207 00:44:31.457312  117427 controller.go:608] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1207 00:44:31.459587  117427 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1207 00:44:31.488735  117427 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1207 00:44:31.528732  117427 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1207 00:44:31.568560  117427 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1207 00:44:31.609068  117427 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1207 00:44:31.648950  117427 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W1207 00:44:31.730230  117427 feature_gate.go:211] Setting GA feature gate CSIPersistentVolume=true. It will be removed in a future release.
				from junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-004204.xml

Error lines from build-log.txt

... skipping 10 lines ...
I1207 00:29:34.021] process 203 exited with code 0 after 0.0m
I1207 00:29:34.021] Call:  gcloud config get-value account
I1207 00:29:34.243] process 216 exited with code 0 after 0.0m
I1207 00:29:34.244] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 00:29:34.244] Call:  kubectl get -oyaml pods/0e55998b-f9b7-11e8-92b6-0a580a6c0310
W1207 00:29:34.342] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E1207 00:29:34.345] Command failed
I1207 00:29:34.346] process 229 exited with code 1 after 0.0m
E1207 00:29:34.346] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/0e55998b-f9b7-11e8-92b6-0a580a6c0310']' returned non-zero exit status 1
I1207 00:29:34.346] Root: /workspace
I1207 00:29:34.346] cd to /workspace
I1207 00:29:34.346] Checkout: /workspace/k8s.io/kubernetes master:1cd6ccb34458def1347ae96b2e8aacb5338f8e1d,71684:3c055aa4b47232bf7d6b5d5a0901dae239e33c59,71734:7ee0e002707de821f469381af54c5106e6f0c933 to /workspace/k8s.io/kubernetes
I1207 00:29:34.346] Call:  git init k8s.io/kubernetes
... skipping 819 lines ...
W1207 00:37:22.343] I1207 00:37:22.342855   55610 job_controller.go:143] Starting job controller
W1207 00:37:22.343] I1207 00:37:22.342876   55610 controller_utils.go:1027] Waiting for caches to sync for job controller
W1207 00:37:22.343] I1207 00:37:22.343047   55610 controllermanager.go:516] Started "csrapproving"
W1207 00:37:22.343] W1207 00:37:22.343061   55610 controllermanager.go:495] "bootstrapsigner" is disabled
W1207 00:37:22.344] I1207 00:37:22.343204   55610 certificate_controller.go:113] Starting certificate controller
W1207 00:37:22.344] I1207 00:37:22.343230   55610 controller_utils.go:1027] Waiting for caches to sync for certificate controller
W1207 00:37:22.344] W1207 00:37:22.343298   55610 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
W1207 00:37:22.344] I1207 00:37:22.343608   55610 garbagecollector.go:133] Starting garbage collector controller
W1207 00:37:22.344] I1207 00:37:22.343624   55610 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 00:37:22.344] I1207 00:37:22.343645   55610 graph_builder.go:308] GraphBuilder running
W1207 00:37:22.345] I1207 00:37:22.343662   55610 controllermanager.go:516] Started "garbagecollector"
W1207 00:37:22.345] I1207 00:37:22.343956   55610 controllermanager.go:516] Started "ttl"
W1207 00:37:22.345] I1207 00:37:22.344054   55610 ttl_controller.go:116] Starting TTL controller
... skipping 5 lines ...
W1207 00:37:22.346] I1207 00:37:22.344887   55610 pv_controller_base.go:271] Starting persistent volume controller
W1207 00:37:22.346] I1207 00:37:22.344986   55610 controller_utils.go:1027] Waiting for caches to sync for persistent volume controller
W1207 00:37:22.346] I1207 00:37:22.345047   55610 controllermanager.go:516] Started "clusterrole-aggregation"
W1207 00:37:22.347] I1207 00:37:22.345339   55610 controllermanager.go:516] Started "endpoint"
W1207 00:37:22.347] W1207 00:37:22.345369   55610 controllermanager.go:508] Skipping "csrsigning"
W1207 00:37:22.347] W1207 00:37:22.345373   55610 controllermanager.go:495] "tokencleaner" is disabled
W1207 00:37:22.347] E1207 00:37:22.345703   55610 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1207 00:37:22.347] W1207 00:37:22.345719   55610 controllermanager.go:508] Skipping "service"
W1207 00:37:22.347] W1207 00:37:22.345726   55610 controllermanager.go:508] Skipping "root-ca-cert-publisher"
W1207 00:37:22.347] I1207 00:37:22.346027   55610 controllermanager.go:516] Started "serviceaccount"
W1207 00:37:22.348] I1207 00:37:22.346430   55610 controllermanager.go:516] Started "deployment"
W1207 00:37:22.349] I1207 00:37:22.348888   55610 controllermanager.go:516] Started "replicaset"
W1207 00:37:22.349] W1207 00:37:22.348919   55610 controllermanager.go:508] Skipping "nodeipam"
... skipping 37 lines ...
W1207 00:37:22.362] I1207 00:37:22.359789   55610 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
W1207 00:37:22.363] I1207 00:37:22.359812   55610 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
W1207 00:37:22.363] I1207 00:37:22.359832   55610 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
W1207 00:37:22.363] I1207 00:37:22.359853   55610 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
W1207 00:37:22.363] I1207 00:37:22.359877   55610 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
W1207 00:37:22.363] I1207 00:37:22.359901   55610 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W1207 00:37:22.364] E1207 00:37:22.359934   55610 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 00:37:22.364] I1207 00:37:22.359944   55610 controllermanager.go:516] Started "resourcequota"
W1207 00:37:22.364] I1207 00:37:22.360034   55610 resource_quota_controller.go:276] Starting resource quota controller
W1207 00:37:22.364] I1207 00:37:22.360054   55610 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W1207 00:37:22.364] I1207 00:37:22.360071   55610 resource_quota_monitor.go:301] QuotaMonitor running
W1207 00:37:22.365] I1207 00:37:22.364997   55610 controllermanager.go:516] Started "namespace"
W1207 00:37:22.365] I1207 00:37:22.365095   55610 namespace_controller.go:186] Starting namespace controller
... skipping 37 lines ...
W1207 00:37:22.451] I1207 00:37:22.450696   55610 controller_utils.go:1034] Caches are synced for service account controller
W1207 00:37:22.452] I1207 00:37:22.451004   55610 controller_utils.go:1034] Caches are synced for ReplicaSet controller
W1207 00:37:22.452] I1207 00:37:22.451187   55610 taint_manager.go:198] Starting NoExecuteTaintManager
W1207 00:37:22.452] I1207 00:37:22.451542   55610 controller_utils.go:1034] Caches are synced for deployment controller
W1207 00:37:22.453] I1207 00:37:22.453019   55610 controller_utils.go:1034] Caches are synced for attach detach controller
W1207 00:37:22.453] I1207 00:37:22.453082   52249 controller.go:608] quota admission added evaluator for: serviceaccounts
W1207 00:37:22.458] E1207 00:37:22.458178   55610 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W1207 00:37:22.462] E1207 00:37:22.462040   55610 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1207 00:37:22.465] I1207 00:37:22.465328   55610 controller_utils.go:1034] Caches are synced for namespace controller
W1207 00:37:22.467] I1207 00:37:22.467212   55610 controller_utils.go:1034] Caches are synced for HPA controller
W1207 00:37:22.467] I1207 00:37:22.467322   55610 controller_utils.go:1034] Caches are synced for ReplicationController controller
W1207 00:37:22.468] I1207 00:37:22.467674   55610 controller_utils.go:1034] Caches are synced for GC controller
W1207 00:37:22.468] I1207 00:37:22.467720   55610 controller_utils.go:1034] Caches are synced for PV protection controller
W1207 00:37:22.469] I1207 00:37:22.468944   55610 controller_utils.go:1034] Caches are synced for stateful set controller
... skipping 3 lines ...
W1207 00:37:22.760] I1207 00:37:22.760353   55610 controller_utils.go:1034] Caches are synced for resource quota controller
I1207 00:37:22.989] +++ [1207 00:37:22] On try 3, controller-manager: ok
I1207 00:37:23.166] node/127.0.0.1 created
I1207 00:37:23.177] +++ [1207 00:37:23] Checking kubectl version
I1207 00:37:23.238] Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.897+8a167afe7042db", GitCommit:"8a167afe7042dbbcde1c65b2cd0e20a8f5cdf423", GitTreeState:"clean", BuildDate:"2018-12-07T00:35:42Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
I1207 00:37:23.238] Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.897+8a167afe7042db", GitCommit:"8a167afe7042dbbcde1c65b2cd0e20a8f5cdf423", GitTreeState:"clean", BuildDate:"2018-12-07T00:35:58Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
W1207 00:37:23.339] W1207 00:37:23.167560   55610 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1207 00:37:23.521] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
I1207 00:37:23.622] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
I1207 00:37:23.622] kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   30s
I1207 00:37:23.622] Recording: run_kubectl_version_tests
I1207 00:37:23.622] Running command: run_kubectl_version_tests
I1207 00:37:23.622] 
... skipping 10 lines ...
I1207 00:37:23.684]   "buildDate": "2018-12-07T00:35:58Z",
I1207 00:37:23.684]   "goVersion": "go1.11.1",
I1207 00:37:23.684]   "compiler": "gc",
I1207 00:37:23.685]   "platform": "linux/amd64"
I1207 00:37:23.817] }+++ [1207 00:37:23] Testing kubectl version: check client only output matches expected output
W1207 00:37:23.918] I1207 00:37:23.838529   55610 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 00:37:23.918] E1207 00:37:23.856741   55610 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 00:37:23.939] I1207 00:37:23.938858   55610 controller_utils.go:1034] Caches are synced for garbage collector controller
W1207 00:37:23.944] I1207 00:37:23.943926   55610 controller_utils.go:1034] Caches are synced for garbage collector controller
W1207 00:37:23.944] I1207 00:37:23.943963   55610 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1207 00:37:24.045] Successful: the flag '--client' shows correct client info
I1207 00:37:24.045] (BSuccessful: the flag '--client' correctly has no server version info
I1207 00:37:24.045] (B+++ [1207 00:37:23] Testing kubectl version: verify json output
... skipping 54 lines ...
I1207 00:37:27.033] +++ working dir: /go/src/k8s.io/kubernetes
I1207 00:37:27.035] +++ command: run_RESTMapper_evaluation_tests
I1207 00:37:27.048] +++ [1207 00:37:27] Creating namespace namespace-1544143047-19313
I1207 00:37:27.119] namespace/namespace-1544143047-19313 created
I1207 00:37:27.179] Context "test" modified.
I1207 00:37:27.185] +++ [1207 00:37:27] Testing RESTMapper
I1207 00:37:27.293] +++ [1207 00:37:27] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1207 00:37:27.307] +++ exit code: 0
I1207 00:37:27.417] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1207 00:37:27.418] bindings                                                                      true         Binding
I1207 00:37:27.418] componentstatuses                 cs                                          false        ComponentStatus
I1207 00:37:27.418] configmaps                        cm                                          true         ConfigMap
I1207 00:37:27.418] endpoints                         ep                                          true         Endpoints
... skipping 609 lines ...
I1207 00:37:44.732] (Bpoddisruptionbudget.policy/test-pdb-3 created
I1207 00:37:44.817] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1207 00:37:44.880] (Bpoddisruptionbudget.policy/test-pdb-4 created
I1207 00:37:44.966] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1207 00:37:45.111] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:37:45.268] (Bpod/env-test-pod created
W1207 00:37:45.368] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1207 00:37:45.369] error: setting 'all' parameter but found a non empty selector. 
W1207 00:37:45.369] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 00:37:45.369] I1207 00:37:44.434558   52249 controller.go:608] quota admission added evaluator for: poddisruptionbudgets.policy
W1207 00:37:45.369] error: min-available and max-unavailable cannot be both specified
I1207 00:37:45.469] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I1207 00:37:45.470] Name:               env-test-pod
I1207 00:37:45.470] Namespace:          test-kubectl-describe-pod
I1207 00:37:45.470] Priority:           0
I1207 00:37:45.470] PriorityClassName:  <none>
I1207 00:37:45.470] Node:               <none>
... skipping 145 lines ...
W1207 00:37:56.682] I1207 00:37:56.099087   55610 namespace_controller.go:171] Namespace has been deleted test-kubectl-describe-pod
W1207 00:37:56.683] I1207 00:37:56.254567   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143071-22729", Name:"modified", UID:"56ed6e3e-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-dvcbt
I1207 00:37:56.804] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:37:56.942] (Bpod/valid-pod created
I1207 00:37:57.034] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 00:37:57.173] (BSuccessful
I1207 00:37:57.174] message:Error from server: cannot restore map from string
I1207 00:37:57.174] has:cannot restore map from string
I1207 00:37:57.254] Successful
I1207 00:37:57.254] message:pod/valid-pod patched (no change)
I1207 00:37:57.254] has:patched (no change)
I1207 00:37:57.330] pod/valid-pod patched
I1207 00:37:57.418] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I1207 00:37:57.894] (Bpod/valid-pod patched
I1207 00:37:57.982] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1207 00:37:58.049] (Bpod/valid-pod patched
I1207 00:37:58.138] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1207 00:37:58.286] (Bpod/valid-pod patched
I1207 00:37:58.379] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 00:37:58.538] (B+++ [1207 00:37:58] "kubectl patch with resourceVersion 486" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W1207 00:37:58.639] E1207 00:37:57.166190   52249 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I1207 00:37:58.756] pod "valid-pod" deleted
I1207 00:37:58.766] pod/valid-pod replaced
I1207 00:37:58.856] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1207 00:37:58.994] (BSuccessful
I1207 00:37:58.995] message:error: --grace-period must have --force specified
I1207 00:37:58.995] has:\-\-grace-period must have \-\-force specified
I1207 00:37:59.134] Successful
I1207 00:37:59.134] message:error: --timeout must have --force specified
I1207 00:37:59.134] has:\-\-timeout must have \-\-force specified
I1207 00:37:59.280] node/node-v1-test created
W1207 00:37:59.381] W1207 00:37:59.280529   55610 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1207 00:37:59.481] node/node-v1-test replaced
I1207 00:37:59.506] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1207 00:37:59.579] (Bnode "node-v1-test" deleted
I1207 00:37:59.668] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 00:37:59.907] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I1207 00:38:00.720] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 57 lines ...
I1207 00:38:04.388] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:04.526] (Bpod/test-pod created
W1207 00:38:04.627] Edit cancelled, no changes made.
W1207 00:38:04.627] Edit cancelled, no changes made.
W1207 00:38:04.627] Edit cancelled, no changes made.
W1207 00:38:04.627] Edit cancelled, no changes made.
W1207 00:38:04.628] error: 'name' already has a value (valid-pod), and --overwrite is false
W1207 00:38:04.628] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 00:38:04.628] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 00:38:04.728] pod "test-pod" deleted
I1207 00:38:04.729] +++ [1207 00:38:04] Creating namespace namespace-1544143084-23000
I1207 00:38:04.754] namespace/namespace-1544143084-23000 created
I1207 00:38:04.822] Context "test" modified.
... skipping 41 lines ...
I1207 00:38:07.893] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1207 00:38:07.895] +++ working dir: /go/src/k8s.io/kubernetes
I1207 00:38:07.897] +++ command: run_kubectl_create_error_tests
I1207 00:38:07.911] +++ [1207 00:38:07] Creating namespace namespace-1544143087-11649
I1207 00:38:07.982] namespace/namespace-1544143087-11649 created
I1207 00:38:08.054] Context "test" modified.
I1207 00:38:08.062] +++ [1207 00:38:08] Testing kubectl create with error
W1207 00:38:08.162] Error: required flag(s) "filename" not set
W1207 00:38:08.163] 
W1207 00:38:08.163] 
W1207 00:38:08.163] Examples:
W1207 00:38:08.163]   # Create a pod using the data in pod.json.
W1207 00:38:08.163]   kubectl create -f ./pod.json
W1207 00:38:08.163]   
... skipping 38 lines ...
W1207 00:38:08.169]   kubectl create -f FILENAME [options]
W1207 00:38:08.169] 
W1207 00:38:08.169] Use "kubectl <command> --help" for more information about a given command.
W1207 00:38:08.170] Use "kubectl options" for a list of global command-line options (applies to all commands).
W1207 00:38:08.170] 
W1207 00:38:08.170] required flag(s) "filename" not set
I1207 00:38:08.290] +++ [1207 00:38:08] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1207 00:38:08.391] kubectl convert is DEPRECATED and will be removed in a future version.
W1207 00:38:08.391] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 00:38:08.491] +++ exit code: 0
I1207 00:38:08.501] Recording: run_kubectl_apply_tests
I1207 00:38:08.501] Running command: run_kubectl_apply_tests
I1207 00:38:08.520] 
... skipping 17 lines ...
I1207 00:38:09.638] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I1207 00:38:10.475] (Bdeployment.extensions "test-deployment-retainkeys" deleted
I1207 00:38:10.575] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:10.731] (Bpod/selector-test-pod created
I1207 00:38:10.829] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1207 00:38:10.919] (BSuccessful
I1207 00:38:10.919] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1207 00:38:10.919] has:pods "selector-test-pod-dont-apply" not found
I1207 00:38:10.998] pod "selector-test-pod" deleted
I1207 00:38:11.096] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:11.314] (Bpod/test-pod created (server dry run)
I1207 00:38:11.414] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:11.572] (Bpod/test-pod created
... skipping 8 lines ...
W1207 00:38:12.318] I1207 00:38:12.317385   52249 clientconn.go:551] parsed scheme: ""
W1207 00:38:12.318] I1207 00:38:12.317433   52249 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 00:38:12.318] I1207 00:38:12.317484   52249 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 00:38:12.319] I1207 00:38:12.317534   52249 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 00:38:12.319] I1207 00:38:12.318057   52249 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 00:38:12.406] I1207 00:38:12.405351   52249 controller.go:608] quota admission added evaluator for: resources.mygroup.example.com
W1207 00:38:12.500] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1207 00:38:12.600] kind.mygroup.example.com/myobj created (server dry run)
I1207 00:38:12.601] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1207 00:38:12.694] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:12.845] (Bpod/a created
I1207 00:38:14.352] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I1207 00:38:14.440] (BSuccessful
I1207 00:38:14.440] message:Error from server (NotFound): pods "b" not found
I1207 00:38:14.441] has:pods "b" not found
I1207 00:38:14.602] pod/b created
I1207 00:38:14.619] pod/a pruned
I1207 00:38:16.298] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I1207 00:38:16.377] (BSuccessful
I1207 00:38:16.377] message:Error from server (NotFound): pods "a" not found
I1207 00:38:16.377] has:pods "a" not found
I1207 00:38:16.448] pod "b" deleted
I1207 00:38:16.538] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:16.695] (Bpod/a created
I1207 00:38:16.789] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I1207 00:38:16.871] (BSuccessful
I1207 00:38:16.871] message:Error from server (NotFound): pods "b" not found
I1207 00:38:16.871] has:pods "b" not found
I1207 00:38:17.014] pod/b created
I1207 00:38:17.098] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I1207 00:38:17.179] (Bapply.sh:166: Successful get pods b {{.metadata.name}}: b
I1207 00:38:17.253] (Bpod "a" deleted
I1207 00:38:17.257] pod "b" deleted
I1207 00:38:17.400] Successful
I1207 00:38:17.401] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I1207 00:38:17.401] has:all resources selected for prune without explicitly passing --all
I1207 00:38:17.539] pod/a created
I1207 00:38:17.546] pod/b created
I1207 00:38:17.553] service/prune-svc created
I1207 00:38:19.045] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I1207 00:38:19.124] (Bapply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 127 lines ...
I1207 00:38:31.033] Context "test" modified.
I1207 00:38:31.039] +++ [1207 00:38:31] Testing kubectl create filter
I1207 00:38:31.120] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:31.258] (Bpod/selector-test-pod created
I1207 00:38:31.343] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1207 00:38:31.418] (BSuccessful
I1207 00:38:31.418] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1207 00:38:31.418] has:pods "selector-test-pod-dont-apply" not found
I1207 00:38:31.491] pod "selector-test-pod" deleted
I1207 00:38:31.510] +++ exit code: 0
I1207 00:38:31.545] Recording: run_kubectl_apply_deployments_tests
I1207 00:38:31.545] Running command: run_kubectl_apply_deployments_tests
I1207 00:38:31.565] 
... skipping 28 lines ...
I1207 00:38:33.394] (Bapps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:33.471] (Bapps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:33.550] (Bapps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:33.687] (Bdeployment.extensions/nginx created
I1207 00:38:33.771] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I1207 00:38:37.947] (BSuccessful
I1207 00:38:37.947] message:Error from server (Conflict): error when applying patch:
I1207 00:38:37.948] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544143111-20185\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1207 00:38:37.948] to:
I1207 00:38:37.948] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I1207 00:38:37.948] Name: "nginx", Namespace: "namespace-1544143111-20185"
I1207 00:38:37.949] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1544143111-20185/deployments/nginx" "generation":'\x01' "creationTimestamp":"2018-12-07T00:38:33Z" "name":"nginx" "namespace":"namespace-1544143111-20185" "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544143111-20185\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "uid":"6d3d7e72-f9b8-11e8-843b-0242ac110002" "resourceVersion":"705"] "spec":map["revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']]] "status":map["conditions":[map["type":"Available" "status":"False" "lastUpdateTime":"2018-12-07T00:38:33Z" "lastTransitionTime":"2018-12-07T00:38:33Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability."]] "observedGeneration":'\x01' 
"replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03']]}
I1207 00:38:37.950] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I1207 00:38:37.950] has:Error from server (Conflict)
W1207 00:38:38.050] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1207 00:38:38.050] I1207 00:38:29.156792   52249 controller.go:608] quota admission added evaluator for: jobs.batch
W1207 00:38:38.051] I1207 00:38:29.168814   55610 event.go:221] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1544143108-29184", Name:"pi", UID:"6a8a9592-f9b8-11e8-843b-0242ac110002", APIVersion:"batch/v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-hss2p
W1207 00:38:38.051] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1207 00:38:38.051] I1207 00:38:29.678588   52249 controller.go:608] quota admission added evaluator for: deployments.apps
W1207 00:38:38.051] I1207 00:38:29.684159   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143108-29184", Name:"nginx-extensions", UID:"6ada36af-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-extensions-6fb4b564f5 to 1
... skipping 103 lines ...
I1207 00:38:49.450] +++ [1207 00:38:49] Creating namespace namespace-1544143129-6531
I1207 00:38:49.512] namespace/namespace-1544143129-6531 created
I1207 00:38:49.571] Context "test" modified.
I1207 00:38:49.577] +++ [1207 00:38:49] Testing kubectl get
I1207 00:38:49.653] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:49.727] (BSuccessful
I1207 00:38:49.727] message:Error from server (NotFound): pods "abc" not found
I1207 00:38:49.727] has:pods "abc" not found
I1207 00:38:49.804] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:49.876] (BSuccessful
I1207 00:38:49.876] message:Error from server (NotFound): pods "abc" not found
I1207 00:38:49.876] has:pods "abc" not found
I1207 00:38:49.951] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:50.023] (BSuccessful
I1207 00:38:50.023] message:{
I1207 00:38:50.023]     "apiVersion": "v1",
I1207 00:38:50.023]     "items": [],
... skipping 23 lines ...
I1207 00:38:50.304] has not:No resources found
I1207 00:38:50.377] Successful
I1207 00:38:50.377] message:NAME
I1207 00:38:50.377] has not:No resources found
I1207 00:38:50.451] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:50.547] (BSuccessful
I1207 00:38:50.548] message:error: the server doesn't have a resource type "foobar"
I1207 00:38:50.548] has not:No resources found
I1207 00:38:50.619] Successful
I1207 00:38:50.619] message:No resources found.
I1207 00:38:50.619] has:No resources found
I1207 00:38:50.688] Successful
I1207 00:38:50.688] message:
I1207 00:38:50.688] has not:No resources found
I1207 00:38:50.760] Successful
I1207 00:38:50.760] message:No resources found.
I1207 00:38:50.760] has:No resources found
I1207 00:38:50.835] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:50.907] (BSuccessful
I1207 00:38:50.907] message:Error from server (NotFound): pods "abc" not found
I1207 00:38:50.907] has:pods "abc" not found
I1207 00:38:50.909] FAIL!
I1207 00:38:50.909] message:Error from server (NotFound): pods "abc" not found
I1207 00:38:50.909] has not:List
I1207 00:38:50.909] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1207 00:38:51.008] Successful
I1207 00:38:51.008] message:I1207 00:38:50.962473   67898 loader.go:359] Config loaded from file /tmp/tmp.j3fTvrR3D1/.kube/config
I1207 00:38:51.008] I1207 00:38:50.962940   67898 loader.go:359] Config loaded from file /tmp/tmp.j3fTvrR3D1/.kube/config
I1207 00:38:51.009] I1207 00:38:50.964106   67898 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
... skipping 995 lines ...
I1207 00:38:54.281] }
I1207 00:38:54.357] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 00:38:54.594] (B<no value>Successful
I1207 00:38:54.595] message:valid-pod:
I1207 00:38:54.595] has:valid-pod:
I1207 00:38:54.669] Successful
I1207 00:38:54.669] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1207 00:38:54.669] 	template was:
I1207 00:38:54.669] 		{.missing}
I1207 00:38:54.669] 	object given to jsonpath engine was:
I1207 00:38:54.670] 		map[string]interface {}{"metadata":map[string]interface {}{"name":"valid-pod", "namespace":"namespace-1544143133-30749", "selfLink":"/api/v1/namespaces/namespace-1544143133-30749/pods/valid-pod", "uid":"7978a67a-f9b8-11e8-843b-0242ac110002", "resourceVersion":"801", "creationTimestamp":"2018-12-07T00:38:54Z", "labels":map[string]interface {}{"name":"valid-pod"}}, "spec":map[string]interface {}{"priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler"}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}, "kind":"Pod", "apiVersion":"v1"}
I1207 00:38:54.670] has:missing is not found
I1207 00:38:54.743] Successful
I1207 00:38:54.743] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1207 00:38:54.743] 	template was:
I1207 00:38:54.743] 		{{.missing}}
I1207 00:38:54.743] 	raw data was:
I1207 00:38:54.744] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2018-12-07T00:38:54Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1544143133-30749","resourceVersion":"801","selfLink":"/api/v1/namespaces/namespace-1544143133-30749/pods/valid-pod","uid":"7978a67a-f9b8-11e8-843b-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1207 00:38:54.744] 	object given to template engine was:
I1207 00:38:54.744] 		map[spec:map[containers:[map[name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed] apiVersion:v1 kind:Pod metadata:map[name:valid-pod namespace:namespace-1544143133-30749 resourceVersion:801 selfLink:/api/v1/namespaces/namespace-1544143133-30749/pods/valid-pod uid:7978a67a-f9b8-11e8-843b-0242ac110002 creationTimestamp:2018-12-07T00:38:54Z labels:map[name:valid-pod]]]
I1207 00:38:54.745] has:map has no entry for key "missing"
W1207 00:38:54.845] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W1207 00:38:55.814] E1207 00:38:55.813952   68289 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I1207 00:38:55.915] Successful
I1207 00:38:55.915] message:NAME        READY   STATUS    RESTARTS   AGE
I1207 00:38:55.915] valid-pod   0/1     Pending   0          0s
I1207 00:38:55.915] has:STATUS
I1207 00:38:55.915] Successful
... skipping 80 lines ...
I1207 00:38:58.077]   terminationGracePeriodSeconds: 30
I1207 00:38:58.077] status:
I1207 00:38:58.077]   phase: Pending
I1207 00:38:58.077]   qosClass: Guaranteed
I1207 00:38:58.077] has:name: valid-pod
I1207 00:38:58.077] Successful
I1207 00:38:58.077] message:Error from server (NotFound): pods "invalid-pod" not found
I1207 00:38:58.077] has:"invalid-pod" not found
I1207 00:38:58.123] pod "valid-pod" deleted
I1207 00:38:58.206] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:38:58.335] (Bpod/redis-master created
I1207 00:38:58.339] pod/valid-pod created
I1207 00:38:58.427] Successful
... skipping 305 lines ...
I1207 00:39:02.212] Running command: run_create_secret_tests
I1207 00:39:02.232] 
I1207 00:39:02.233] +++ Running case: test-cmd.run_create_secret_tests 
I1207 00:39:02.236] +++ working dir: /go/src/k8s.io/kubernetes
I1207 00:39:02.238] +++ command: run_create_secret_tests
I1207 00:39:02.326] Successful
I1207 00:39:02.326] message:Error from server (NotFound): secrets "mysecret" not found
I1207 00:39:02.326] has:secrets "mysecret" not found
W1207 00:39:02.427] I1207 00:39:01.452235   52249 clientconn.go:551] parsed scheme: ""
W1207 00:39:02.427] I1207 00:39:01.452304   52249 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 00:39:02.427] I1207 00:39:01.452364   52249 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 00:39:02.428] I1207 00:39:01.452472   52249 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 00:39:02.428] I1207 00:39:01.452884   52249 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 00:39:02.428] No resources found.
W1207 00:39:02.428] No resources found.
I1207 00:39:02.528] Successful
I1207 00:39:02.529] message:Error from server (NotFound): secrets "mysecret" not found
I1207 00:39:02.529] has:secrets "mysecret" not found
I1207 00:39:02.529] Successful
I1207 00:39:02.529] message:user-specified
I1207 00:39:02.529] has:user-specified
I1207 00:39:02.535] Successful
I1207 00:39:02.607] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"7e7a32c1-f9b8-11e8-843b-0242ac110002","resourceVersion":"874","creationTimestamp":"2018-12-07T00:39:02Z"}}
... skipping 80 lines ...
I1207 00:39:04.456] has:Timeout exceeded while reading body
I1207 00:39:04.530] Successful
I1207 00:39:04.530] message:NAME        READY   STATUS    RESTARTS   AGE
I1207 00:39:04.530] valid-pod   0/1     Pending   0          1s
I1207 00:39:04.531] has:valid-pod
I1207 00:39:04.593] Successful
I1207 00:39:04.593] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1207 00:39:04.593] has:Invalid timeout value
I1207 00:39:04.664] pod "valid-pod" deleted
I1207 00:39:04.684] +++ exit code: 0
I1207 00:39:04.726] Recording: run_crd_tests
I1207 00:39:04.727] Running command: run_crd_tests
I1207 00:39:04.747] 
... skipping 166 lines ...
I1207 00:39:08.606] foo.company.com/test patched
I1207 00:39:08.691] crd.sh:237: Successful get foos/test {{.patched}}: value1
I1207 00:39:08.764] (Bfoo.company.com/test patched
I1207 00:39:08.846] crd.sh:239: Successful get foos/test {{.patched}}: value2
I1207 00:39:08.916] (Bfoo.company.com/test patched
I1207 00:39:09.002] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I1207 00:39:09.141] (B+++ [1207 00:39:09] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I1207 00:39:09.195] {
I1207 00:39:09.195]     "apiVersion": "company.com/v1",
I1207 00:39:09.195]     "kind": "Foo",
I1207 00:39:09.195]     "metadata": {
I1207 00:39:09.195]         "annotations": {
I1207 00:39:09.195]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
W1207 00:39:10.585] I1207 00:39:07.091128   52249 controller.go:608] quota admission added evaluator for: foos.company.com
W1207 00:39:10.586] I1207 00:39:10.245267   52249 controller.go:608] quota admission added evaluator for: bars.company.com
W1207 00:39:10.586] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 70824 Killed                  while [ ${tries} -lt 10 ]; do
W1207 00:39:10.586]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W1207 00:39:10.586] done
W1207 00:39:10.586] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 70823 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W1207 00:39:23.976] E1207 00:39:23.975109   55610 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos"]
W1207 00:39:24.166] I1207 00:39:24.166395   55610 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 00:39:24.168] I1207 00:39:24.167661   52249 clientconn.go:551] parsed scheme: ""
W1207 00:39:24.168] I1207 00:39:24.167694   52249 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 00:39:24.168] I1207 00:39:24.167782   52249 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 00:39:24.168] I1207 00:39:24.167858   52249 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 00:39:24.169] I1207 00:39:24.168738   52249 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 81 lines ...
I1207 00:39:35.835] +++ [1207 00:39:35] Testing cmd with image
I1207 00:39:35.917] Successful
I1207 00:39:35.917] message:deployment.apps/test1 created
I1207 00:39:35.917] has:deployment.apps/test1 created
I1207 00:39:35.986] deployment.extensions "test1" deleted
I1207 00:39:36.058] Successful
I1207 00:39:36.059] message:error: Invalid image name "InvalidImageName": invalid reference format
I1207 00:39:36.059] has:error: Invalid image name "InvalidImageName": invalid reference format
I1207 00:39:36.071] +++ exit code: 0
I1207 00:39:36.106] Recording: run_recursive_resources_tests
I1207 00:39:36.107] Running command: run_recursive_resources_tests
I1207 00:39:36.124] 
I1207 00:39:36.126] +++ Running case: test-cmd.run_recursive_resources_tests 
I1207 00:39:36.127] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I1207 00:39:36.266] Context "test" modified.
I1207 00:39:36.348] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:39:36.582] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:36.585] (BSuccessful
I1207 00:39:36.585] message:pod/busybox0 created
I1207 00:39:36.585] pod/busybox1 created
I1207 00:39:36.585] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 00:39:36.585] has:error validating data: kind not set
I1207 00:39:36.670] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:36.832] (Bgeneric-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1207 00:39:36.834] (BSuccessful
I1207 00:39:36.834] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 00:39:36.834] has:Object 'Kind' is missing
I1207 00:39:36.915] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:37.158] (Bgeneric-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1207 00:39:37.161] (BSuccessful
I1207 00:39:37.161] message:pod/busybox0 replaced
I1207 00:39:37.161] pod/busybox1 replaced
I1207 00:39:37.161] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 00:39:37.161] has:error validating data: kind not set
I1207 00:39:37.242] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:37.324] (BSuccessful
I1207 00:39:37.324] message:Name:               busybox0
I1207 00:39:37.324] Namespace:          namespace-1544143176-22500
I1207 00:39:37.324] Priority:           0
I1207 00:39:37.324] PriorityClassName:  <none>
... skipping 159 lines ...
I1207 00:39:37.340] has:Object 'Kind' is missing
I1207 00:39:37.407] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:37.565] (Bgeneric-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1207 00:39:37.567] (BSuccessful
I1207 00:39:37.567] message:pod/busybox0 annotated
I1207 00:39:37.567] pod/busybox1 annotated
I1207 00:39:37.567] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 00:39:37.568] has:Object 'Kind' is missing
I1207 00:39:37.644] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:37.862] (Bgeneric-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1207 00:39:37.864] (BSuccessful
I1207 00:39:37.864] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 00:39:37.864] pod/busybox0 configured
I1207 00:39:37.864] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 00:39:37.864] pod/busybox1 configured
I1207 00:39:37.865] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 00:39:37.865] has:error validating data: kind not set
I1207 00:39:37.940] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:39:38.064] (Bdeployment.extensions/nginx created
I1207 00:39:38.152] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1207 00:39:38.230] (Bgeneric-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 00:39:38.371] (Bgeneric-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I1207 00:39:38.373] (BSuccessful
... skipping 42 lines ...
I1207 00:39:38.445] deployment.extensions "nginx" deleted
I1207 00:39:38.527] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:38.669] (Bgeneric-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:38.671] (BSuccessful
I1207 00:39:38.671] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1207 00:39:38.671] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 00:39:38.671] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 00:39:38.672] has:Object 'Kind' is missing
I1207 00:39:38.747] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:38.818] (BSuccessful
I1207 00:39:38.818] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 00:39:38.819] has:busybox0:busybox1:
I1207 00:39:38.820] Successful
I1207 00:39:38.820] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 00:39:38.820] has:Object 'Kind' is missing
I1207 00:39:38.897] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:38.973] (Bpod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 00:39:39.051] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1207 00:39:39.053] (BSuccessful
I1207 00:39:39.053] message:pod/busybox0 labeled
I1207 00:39:39.053] pod/busybox1 labeled
I1207 00:39:39.054] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 00:39:39.054] has:Object 'Kind' is missing
I1207 00:39:39.131] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:39.203] (Bpod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 00:39:39.288] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1207 00:39:39.290] (BSuccessful
I1207 00:39:39.291] message:pod/busybox0 patched
I1207 00:39:39.291] pod/busybox1 patched
I1207 00:39:39.291] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 00:39:39.291] has:Object 'Kind' is missing
I1207 00:39:39.371] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:39.524] (Bgeneric-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:39:39.526] (BSuccessful
I1207 00:39:39.526] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 00:39:39.526] pod "busybox0" force deleted
I1207 00:39:39.526] pod "busybox1" force deleted
I1207 00:39:39.527] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 00:39:39.527] has:Object 'Kind' is missing
I1207 00:39:39.602] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:39:39.727] (Breplicationcontroller/busybox0 created
I1207 00:39:39.730] replicationcontroller/busybox1 created
I1207 00:39:39.815] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:39.890] (Bgeneric-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:39.967] (Bgeneric-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 00:39:40.040] (Bgeneric-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 00:39:40.198] (Bgeneric-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1207 00:39:40.274] (Bgeneric-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1207 00:39:40.275] (BSuccessful
I1207 00:39:40.275] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1207 00:39:40.276] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1207 00:39:40.276] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:40.276] has:Object 'Kind' is missing
I1207 00:39:40.341] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1207 00:39:40.414] horizontalpodautoscaler.autoscaling "busybox1" deleted
I1207 00:39:40.498] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:40.579] (Bgeneric-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 00:39:40.657] (Bgeneric-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 00:39:40.816] (Bgeneric-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1207 00:39:40.894] (Bgeneric-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1207 00:39:40.896] (BSuccessful
I1207 00:39:40.896] message:service/busybox0 exposed
I1207 00:39:40.896] service/busybox1 exposed
I1207 00:39:40.897] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:40.897] has:Object 'Kind' is missing
I1207 00:39:40.978] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:41.058] (Bgeneric-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 00:39:41.136] (Bgeneric-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 00:39:41.311] (Bgeneric-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I1207 00:39:41.389] (Bgeneric-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I1207 00:39:41.391] (BSuccessful
I1207 00:39:41.391] message:replicationcontroller/busybox0 scaled
I1207 00:39:41.391] replicationcontroller/busybox1 scaled
I1207 00:39:41.392] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:41.392] has:Object 'Kind' is missing
I1207 00:39:41.472] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:41.633] (Bgeneric-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:39:41.636] (BSuccessful
I1207 00:39:41.636] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 00:39:41.636] replicationcontroller "busybox0" force deleted
I1207 00:39:41.636] replicationcontroller "busybox1" force deleted
I1207 00:39:41.636] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:41.636] has:Object 'Kind' is missing
I1207 00:39:41.715] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:39:41.852] deployment.extensions/nginx1-deployment created
I1207 00:39:41.855] deployment.extensions/nginx0-deployment created
I1207 00:39:41.949] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1207 00:39:42.030] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1207 00:39:42.212] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1207 00:39:42.214] Successful
I1207 00:39:42.214] message:deployment.extensions/nginx1-deployment skipped rollback (current template already matches revision 1)
I1207 00:39:42.214] deployment.extensions/nginx0-deployment skipped rollback (current template already matches revision 1)
I1207 00:39:42.215] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 00:39:42.215] has:Object 'Kind' is missing
I1207 00:39:42.295] deployment.extensions/nginx1-deployment paused
I1207 00:39:42.298] deployment.extensions/nginx0-deployment paused
I1207 00:39:42.393] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1207 00:39:42.396] Successful
I1207 00:39:42.396] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I1207 00:39:42.678] 1         <none>
I1207 00:39:42.678] 
I1207 00:39:42.678] deployment.extensions/nginx0-deployment 
I1207 00:39:42.678] REVISION  CHANGE-CAUSE
I1207 00:39:42.678] 1         <none>
I1207 00:39:42.678] 
I1207 00:39:42.679] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 00:39:42.679] has:nginx0-deployment
I1207 00:39:42.679] Successful
I1207 00:39:42.679] message:deployment.extensions/nginx1-deployment 
I1207 00:39:42.679] REVISION  CHANGE-CAUSE
I1207 00:39:42.680] 1         <none>
I1207 00:39:42.680] 
I1207 00:39:42.680] deployment.extensions/nginx0-deployment 
I1207 00:39:42.680] REVISION  CHANGE-CAUSE
I1207 00:39:42.680] 1         <none>
I1207 00:39:42.680] 
I1207 00:39:42.680] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 00:39:42.680] has:nginx1-deployment
I1207 00:39:42.682] Successful
I1207 00:39:42.682] message:deployment.extensions/nginx1-deployment 
I1207 00:39:42.682] REVISION  CHANGE-CAUSE
I1207 00:39:42.682] 1         <none>
I1207 00:39:42.682] 
I1207 00:39:42.682] deployment.extensions/nginx0-deployment 
I1207 00:39:42.682] REVISION  CHANGE-CAUSE
I1207 00:39:42.682] 1         <none>
I1207 00:39:42.682] 
I1207 00:39:42.683] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 00:39:42.683] has:Object 'Kind' is missing
I1207 00:39:42.753] deployment.extensions "nginx1-deployment" force deleted
I1207 00:39:42.757] deployment.extensions "nginx0-deployment" force deleted
W1207 00:39:42.858] Error from server (NotFound): namespaces "non-native-resources" not found
W1207 00:39:42.858] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1207 00:39:42.858] I1207 00:39:35.906769   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143175-21531", Name:"test1", UID:"9252fcfd-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"987", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W1207 00:39:42.859] I1207 00:39:35.910770   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143175-21531", Name:"test1-fb488bd5d", UID:"925382f0-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-zm9z5
W1207 00:39:42.859] I1207 00:39:38.070074   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143176-22500", Name:"nginx", UID:"939cb4e2-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W1207 00:39:42.859] I1207 00:39:38.072853   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143176-22500", Name:"nginx-6f6bb85d9c", UID:"939d99bf-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-rt9b8
W1207 00:39:42.859] I1207 00:39:38.074198   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143176-22500", Name:"nginx-6f6bb85d9c", UID:"939d99bf-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-xgcv4
W1207 00:39:42.860] I1207 00:39:38.074794   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143176-22500", Name:"nginx-6f6bb85d9c", UID:"939d99bf-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-vkphz
W1207 00:39:42.860] kubectl convert is DEPRECATED and will be removed in a future version.
W1207 00:39:42.860] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1207 00:39:42.860] I1207 00:39:39.730266   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143176-22500", Name:"busybox0", UID:"949a81f6-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1043", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-27c8d
W1207 00:39:42.860] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 00:39:42.860] I1207 00:39:39.732801   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143176-22500", Name:"busybox1", UID:"949b206e-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1045", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-bjmln
W1207 00:39:42.861] I1207 00:39:40.086805   55610 namespace_controller.go:171] Namespace has been deleted non-native-resources
W1207 00:39:42.861] I1207 00:39:41.223214   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143176-22500", Name:"busybox0", UID:"949a81f6-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1064", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-rzjhh
W1207 00:39:42.861] I1207 00:39:41.230472   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143176-22500", Name:"busybox1", UID:"949b206e-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1069", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-2jm2d
W1207 00:39:42.861] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 00:39:42.861] I1207 00:39:41.856308   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143176-22500", Name:"nginx1-deployment", UID:"95deaa55-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1084", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W1207 00:39:42.862] I1207 00:39:41.858631   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143176-22500", Name:"nginx1-deployment-75f6fc6747", UID:"95df3942-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1085", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-ksv7b
W1207 00:39:42.862] I1207 00:39:41.858787   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143176-22500", Name:"nginx0-deployment", UID:"95df3f74-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W1207 00:39:42.862] I1207 00:39:41.867090   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143176-22500", Name:"nginx1-deployment-75f6fc6747", UID:"95df3942-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1085", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-xndf4
W1207 00:39:42.862] I1207 00:39:41.867484   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143176-22500", Name:"nginx0-deployment-b6bb4ccbb", UID:"95dfb871-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-hckp7
W1207 00:39:42.863] I1207 00:39:41.870901   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143176-22500", Name:"nginx0-deployment-b6bb4ccbb", UID:"95dfb871-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-79tss
W1207 00:39:42.863] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 00:39:42.863] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 00:39:43.842] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:39:43.975] replicationcontroller/busybox0 created
I1207 00:39:43.980] replicationcontroller/busybox1 created
I1207 00:39:44.069] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 00:39:44.152] Successful
I1207 00:39:44.152] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I1207 00:39:44.154] message:no rollbacker has been implemented for "ReplicationController"
I1207 00:39:44.154] no rollbacker has been implemented for "ReplicationController"
I1207 00:39:44.155] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:44.155] has:Object 'Kind' is missing
I1207 00:39:44.234] Successful
I1207 00:39:44.235] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:44.235] error: replicationcontrollers "busybox0" pausing is not supported
I1207 00:39:44.235] error: replicationcontrollers "busybox1" pausing is not supported
I1207 00:39:44.235] has:Object 'Kind' is missing
I1207 00:39:44.236] Successful
I1207 00:39:44.236] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:44.237] error: replicationcontrollers "busybox0" pausing is not supported
I1207 00:39:44.237] error: replicationcontrollers "busybox1" pausing is not supported
I1207 00:39:44.237] has:replicationcontrollers "busybox0" pausing is not supported
I1207 00:39:44.238] Successful
I1207 00:39:44.238] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:44.238] error: replicationcontrollers "busybox0" pausing is not supported
I1207 00:39:44.238] error: replicationcontrollers "busybox1" pausing is not supported
I1207 00:39:44.239] has:replicationcontrollers "busybox1" pausing is not supported
I1207 00:39:44.317] Successful
I1207 00:39:44.317] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:44.317] error: replicationcontrollers "busybox0" resuming is not supported
I1207 00:39:44.317] error: replicationcontrollers "busybox1" resuming is not supported
I1207 00:39:44.317] has:Object 'Kind' is missing
I1207 00:39:44.318] Successful
I1207 00:39:44.319] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:44.319] error: replicationcontrollers "busybox0" resuming is not supported
I1207 00:39:44.319] error: replicationcontrollers "busybox1" resuming is not supported
I1207 00:39:44.319] has:replicationcontrollers "busybox0" resuming is not supported
I1207 00:39:44.320] Successful
I1207 00:39:44.321] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:44.321] error: replicationcontrollers "busybox0" resuming is not supported
I1207 00:39:44.321] error: replicationcontrollers "busybox1" resuming is not supported
I1207 00:39:44.321] has:replicationcontrollers "busybox0" resuming is not supported
I1207 00:39:44.386] replicationcontroller "busybox0" force deleted
I1207 00:39:44.390] replicationcontroller "busybox1" force deleted
W1207 00:39:44.490] I1207 00:39:43.978400   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143176-22500", Name:"busybox0", UID:"9722917a-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1130", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-975zk
W1207 00:39:44.491] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 00:39:44.491] I1207 00:39:43.983135   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143176-22500", Name:"busybox1", UID:"97237cd4-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1132", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-hpg55
W1207 00:39:44.491] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 00:39:44.492] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 00:39:45.408] +++ exit code: 0
I1207 00:39:45.436] Recording: run_namespace_tests
I1207 00:39:45.437] Running command: run_namespace_tests
I1207 00:39:45.453] 
I1207 00:39:45.455] +++ Running case: test-cmd.run_namespace_tests 
I1207 00:39:45.457] +++ working dir: /go/src/k8s.io/kubernetes
I1207 00:39:45.459] +++ command: run_namespace_tests
I1207 00:39:45.467] +++ [1207 00:39:45] Testing kubectl(v1:namespaces)
I1207 00:39:45.529] namespace/my-namespace created
I1207 00:39:45.614] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1207 00:39:45.681] namespace "my-namespace" deleted
I1207 00:39:50.788] namespace/my-namespace condition met
I1207 00:39:50.864] Successful
I1207 00:39:50.864] message:Error from server (NotFound): namespaces "my-namespace" not found
I1207 00:39:50.864] has: not found
I1207 00:39:50.963] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I1207 00:39:51.025] namespace/other created
I1207 00:39:51.106] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I1207 00:39:51.189] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:39:51.312] pod/valid-pod created
I1207 00:39:51.402] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 00:39:51.481] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 00:39:51.555] Successful
I1207 00:39:51.555] message:error: a resource cannot be retrieved by name across all namespaces
I1207 00:39:51.555] has:a resource cannot be retrieved by name across all namespaces
I1207 00:39:51.640] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 00:39:51.714] pod "valid-pod" force deleted
I1207 00:39:51.804] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:39:51.872] namespace "other" deleted
W1207 00:39:51.973] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 00:39:53.981] E1207 00:39:53.981115   55610 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 00:39:54.292] I1207 00:39:54.292013   55610 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 00:39:54.393] I1207 00:39:54.392385   55610 controller_utils.go:1034] Caches are synced for garbage collector controller
W1207 00:39:55.112] I1207 00:39:55.111587   55610 horizontal.go:309] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1544143176-22500
W1207 00:39:55.115] I1207 00:39:55.115381   55610 horizontal.go:309] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1544143176-22500
W1207 00:39:55.786] I1207 00:39:55.785406   55610 namespace_controller.go:171] Namespace has been deleted my-namespace
I1207 00:39:56.999] +++ exit code: 0
... skipping 113 lines ...
I1207 00:40:11.886] +++ command: run_client_config_tests
I1207 00:40:11.898] +++ [1207 00:40:11] Creating namespace namespace-1544143211-10663
I1207 00:40:11.967] namespace/namespace-1544143211-10663 created
I1207 00:40:12.028] Context "test" modified.
I1207 00:40:12.035] +++ [1207 00:40:12] Testing client config
I1207 00:40:12.096] Successful
I1207 00:40:12.097] message:error: stat missing: no such file or directory
I1207 00:40:12.097] has:missing: no such file or directory
I1207 00:40:12.161] Successful
I1207 00:40:12.161] message:error: stat missing: no such file or directory
I1207 00:40:12.161] has:missing: no such file or directory
I1207 00:40:12.222] Successful
I1207 00:40:12.222] message:error: stat missing: no such file or directory
I1207 00:40:12.222] has:missing: no such file or directory
I1207 00:40:12.284] Successful
I1207 00:40:12.285] message:Error in configuration: context was not found for specified context: missing-context
I1207 00:40:12.285] has:context was not found for specified context: missing-context
I1207 00:40:12.347] Successful
I1207 00:40:12.348] message:error: no server found for cluster "missing-cluster"
I1207 00:40:12.348] has:no server found for cluster "missing-cluster"
I1207 00:40:12.413] Successful
I1207 00:40:12.414] message:error: auth info "missing-user" does not exist
I1207 00:40:12.414] has:auth info "missing-user" does not exist
I1207 00:40:12.537] Successful
I1207 00:40:12.537] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1207 00:40:12.538] has:Error loading config file
I1207 00:40:12.604] Successful
I1207 00:40:12.604] message:error: stat missing-config: no such file or directory
I1207 00:40:12.604] has:no such file or directory
I1207 00:40:12.619] +++ exit code: 0
I1207 00:40:12.658] Recording: run_service_accounts_tests
I1207 00:40:12.658] Running command: run_service_accounts_tests
I1207 00:40:12.677] 
I1207 00:40:12.679] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 76 lines ...
I1207 00:40:19.761]                 job-name=test-job
I1207 00:40:19.761]                 run=pi
I1207 00:40:19.761] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1207 00:40:19.761] Parallelism:    1
I1207 00:40:19.761] Completions:    1
I1207 00:40:19.761] Start Time:     Fri, 07 Dec 2018 00:40:19 +0000
I1207 00:40:19.761] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1207 00:40:19.761] Pod Template:
I1207 00:40:19.761]   Labels:  controller-uid=ac52e476-f9b8-11e8-843b-0242ac110002
I1207 00:40:19.761]            job-name=test-job
I1207 00:40:19.762]            run=pi
I1207 00:40:19.762]   Containers:
I1207 00:40:19.762]    pi:
... skipping 329 lines ...
I1207 00:40:29.033]   selector:
I1207 00:40:29.033]     role: padawan
I1207 00:40:29.033]   sessionAffinity: None
I1207 00:40:29.033]   type: ClusterIP
I1207 00:40:29.033] status:
I1207 00:40:29.033]   loadBalancer: {}
W1207 00:40:29.133] error: you must specify resources by --filename when --local is set.
W1207 00:40:29.134] Example resource specifications include:
W1207 00:40:29.134]    '-f rsrc.yaml'
W1207 00:40:29.134]    '--filename=rsrc.json'
I1207 00:40:29.234] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1207 00:40:29.337] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1207 00:40:29.411] (Bservice "redis-master" deleted
... skipping 93 lines ...
I1207 00:40:34.726] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 00:40:34.813] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 00:40:34.904] daemonset.extensions/bind rolled back
I1207 00:40:34.993] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 00:40:35.078] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 00:40:35.169] Successful
I1207 00:40:35.170] message:error: unable to find specified revision 1000000 in history
I1207 00:40:35.170] has:unable to find specified revision
I1207 00:40:35.248] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 00:40:35.331] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 00:40:35.424] daemonset.extensions/bind rolled back
I1207 00:40:35.508] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1207 00:40:35.592] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I1207 00:40:36.806] Namespace:    namespace-1544143235-19905
I1207 00:40:36.807] Selector:     app=guestbook,tier=frontend
I1207 00:40:36.807] Labels:       app=guestbook
I1207 00:40:36.807]               tier=frontend
I1207 00:40:36.807] Annotations:  <none>
I1207 00:40:36.807] Replicas:     3 current / 3 desired
I1207 00:40:36.807] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:40:36.807] Pod Template:
I1207 00:40:36.807]   Labels:  app=guestbook
I1207 00:40:36.807]            tier=frontend
I1207 00:40:36.807]   Containers:
I1207 00:40:36.807]    php-redis:
I1207 00:40:36.807]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 00:40:36.912] Namespace:    namespace-1544143235-19905
I1207 00:40:36.912] Selector:     app=guestbook,tier=frontend
I1207 00:40:36.912] Labels:       app=guestbook
I1207 00:40:36.912]               tier=frontend
I1207 00:40:36.912] Annotations:  <none>
I1207 00:40:36.912] Replicas:     3 current / 3 desired
I1207 00:40:36.912] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:40:36.912] Pod Template:
I1207 00:40:36.913]   Labels:  app=guestbook
I1207 00:40:36.913]            tier=frontend
I1207 00:40:36.913]   Containers:
I1207 00:40:36.913]    php-redis:
I1207 00:40:36.913]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1207 00:40:37.015] Namespace:    namespace-1544143235-19905
I1207 00:40:37.015] Selector:     app=guestbook,tier=frontend
I1207 00:40:37.015] Labels:       app=guestbook
I1207 00:40:37.015]               tier=frontend
I1207 00:40:37.015] Annotations:  <none>
I1207 00:40:37.015] Replicas:     3 current / 3 desired
I1207 00:40:37.015] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:40:37.015] Pod Template:
I1207 00:40:37.015]   Labels:  app=guestbook
I1207 00:40:37.015]            tier=frontend
I1207 00:40:37.015]   Containers:
I1207 00:40:37.015]    php-redis:
I1207 00:40:37.016]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1207 00:40:37.117] Namespace:    namespace-1544143235-19905
I1207 00:40:37.117] Selector:     app=guestbook,tier=frontend
I1207 00:40:37.117] Labels:       app=guestbook
I1207 00:40:37.117]               tier=frontend
I1207 00:40:37.117] Annotations:  <none>
I1207 00:40:37.117] Replicas:     3 current / 3 desired
I1207 00:40:37.117] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:40:37.117] Pod Template:
I1207 00:40:37.117]   Labels:  app=guestbook
I1207 00:40:37.117]            tier=frontend
I1207 00:40:37.117]   Containers:
I1207 00:40:37.117]    php-redis:
I1207 00:40:37.118]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 24 lines ...
I1207 00:40:37.321] Namespace:    namespace-1544143235-19905
I1207 00:40:37.321] Selector:     app=guestbook,tier=frontend
I1207 00:40:37.322] Labels:       app=guestbook
I1207 00:40:37.322]               tier=frontend
I1207 00:40:37.322] Annotations:  <none>
I1207 00:40:37.322] Replicas:     3 current / 3 desired
I1207 00:40:37.322] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:40:37.322] Pod Template:
I1207 00:40:37.322]   Labels:  app=guestbook
I1207 00:40:37.322]            tier=frontend
I1207 00:40:37.322]   Containers:
I1207 00:40:37.322]    php-redis:
I1207 00:40:37.322]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 00:40:37.353] Namespace:    namespace-1544143235-19905
I1207 00:40:37.353] Selector:     app=guestbook,tier=frontend
I1207 00:40:37.353] Labels:       app=guestbook
I1207 00:40:37.353]               tier=frontend
I1207 00:40:37.353] Annotations:  <none>
I1207 00:40:37.353] Replicas:     3 current / 3 desired
I1207 00:40:37.354] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:40:37.354] Pod Template:
I1207 00:40:37.354]   Labels:  app=guestbook
I1207 00:40:37.354]            tier=frontend
I1207 00:40:37.354]   Containers:
I1207 00:40:37.354]    php-redis:
I1207 00:40:37.354]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 00:40:37.449] Namespace:    namespace-1544143235-19905
I1207 00:40:37.449] Selector:     app=guestbook,tier=frontend
I1207 00:40:37.449] Labels:       app=guestbook
I1207 00:40:37.449]               tier=frontend
I1207 00:40:37.450] Annotations:  <none>
I1207 00:40:37.450] Replicas:     3 current / 3 desired
I1207 00:40:37.450] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:40:37.450] Pod Template:
I1207 00:40:37.450]   Labels:  app=guestbook
I1207 00:40:37.450]            tier=frontend
I1207 00:40:37.450]   Containers:
I1207 00:40:37.450]    php-redis:
I1207 00:40:37.450]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1207 00:40:37.551] Namespace:    namespace-1544143235-19905
I1207 00:40:37.551] Selector:     app=guestbook,tier=frontend
I1207 00:40:37.551] Labels:       app=guestbook
I1207 00:40:37.551]               tier=frontend
I1207 00:40:37.551] Annotations:  <none>
I1207 00:40:37.551] Replicas:     3 current / 3 desired
I1207 00:40:37.551] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:40:37.551] Pod Template:
I1207 00:40:37.552]   Labels:  app=guestbook
I1207 00:40:37.552]            tier=frontend
I1207 00:40:37.552]   Containers:
I1207 00:40:37.552]    php-redis:
I1207 00:40:37.552]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I1207 00:40:38.321] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I1207 00:40:38.405] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I1207 00:40:38.486] replicationcontroller/frontend scaled
I1207 00:40:38.573] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I1207 00:40:38.647] replicationcontroller "frontend" deleted
W1207 00:40:38.747] I1207 00:40:37.725227   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143235-19905", Name:"frontend", UID:"b67de017-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-nb2vj
W1207 00:40:38.748] error: Expected replicas to be 3, was 2
W1207 00:40:38.748] I1207 00:40:38.238489   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143235-19905", Name:"frontend", UID:"b67de017-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1389", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ctn4j
W1207 00:40:38.748] I1207 00:40:38.491455   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143235-19905", Name:"frontend", UID:"b67de017-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1394", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-ctn4j
W1207 00:40:38.798] I1207 00:40:38.796802   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143235-19905", Name:"redis-master", UID:"b7cf40be-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1405", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-nlscd
I1207 00:40:38.898] replicationcontroller/redis-master created
I1207 00:40:38.934] replicationcontroller/redis-slave created
I1207 00:40:39.026] replicationcontroller/redis-master scaled
... skipping 29 lines ...
I1207 00:40:40.356] service "expose-test-deployment" deleted
I1207 00:40:40.449] Successful
I1207 00:40:40.449] message:service/expose-test-deployment exposed
I1207 00:40:40.449] has:service/expose-test-deployment exposed
I1207 00:40:40.524] service "expose-test-deployment" deleted
I1207 00:40:40.610] Successful
I1207 00:40:40.610] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1207 00:40:40.610] See 'kubectl expose -h' for help and examples
I1207 00:40:40.610] has:invalid deployment: no selectors
I1207 00:40:40.687] Successful
I1207 00:40:40.688] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1207 00:40:40.688] See 'kubectl expose -h' for help and examples
I1207 00:40:40.688] has:invalid deployment: no selectors
W1207 00:40:40.789] I1207 00:40:39.800480   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment", UID:"b8687482-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1460", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-659fc6fb to 3
W1207 00:40:40.789] I1207 00:40:39.802987   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-659fc6fb", UID:"b868f956-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-n4bvx
W1207 00:40:40.789] I1207 00:40:39.804905   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-659fc6fb", UID:"b868f956-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-m2sfb
W1207 00:40:40.789] I1207 00:40:39.805908   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-659fc6fb", UID:"b868f956-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-fgzbt
... skipping 27 lines ...
I1207 00:40:42.497] service "frontend" deleted
I1207 00:40:42.503] service "frontend-2" deleted
I1207 00:40:42.509] service "frontend-3" deleted
I1207 00:40:42.514] service "frontend-4" deleted
I1207 00:40:42.520] service "frontend-5" deleted
I1207 00:40:42.609] Successful
I1207 00:40:42.609] message:error: cannot expose a Node
I1207 00:40:42.610] has:cannot expose
I1207 00:40:42.689] Successful
I1207 00:40:42.690] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1207 00:40:42.690] has:metadata.name: Invalid value
I1207 00:40:42.772] Successful
I1207 00:40:42.772] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 33 lines ...
I1207 00:40:44.808] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1207 00:40:44.898] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1207 00:40:44.971] horizontalpodautoscaler.autoscaling "frontend" deleted
W1207 00:40:45.072] I1207 00:40:44.406083   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143235-19905", Name:"frontend", UID:"bb271953-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hwzr6
W1207 00:40:45.072] I1207 00:40:44.408407   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143235-19905", Name:"frontend", UID:"bb271953-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-c952q
W1207 00:40:45.073] I1207 00:40:44.408486   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143235-19905", Name:"frontend", UID:"bb271953-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f6rbv
W1207 00:40:45.073] Error: required flag(s) "max" not set
W1207 00:40:45.073] 
W1207 00:40:45.073] 
W1207 00:40:45.073] Examples:
W1207 00:40:45.073]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1207 00:40:45.074]   kubectl autoscale deployment foo --min=2 --max=10
W1207 00:40:45.074]   
... skipping 54 lines ...
I1207 00:40:45.270]           limits:
I1207 00:40:45.270]             cpu: 300m
I1207 00:40:45.270]           requests:
I1207 00:40:45.270]             cpu: 300m
I1207 00:40:45.270]       terminationGracePeriodSeconds: 0
I1207 00:40:45.270] status: {}
W1207 00:40:45.371] Error from server (NotFound): deployments.extensions "nginx-deployment-resources" not found
I1207 00:40:45.482] deployment.extensions/nginx-deployment-resources created
I1207 00:40:45.571] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1207 00:40:45.655] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 00:40:45.736] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1207 00:40:45.817] deployment.extensions/nginx-deployment-resources resource requirements updated
I1207 00:40:45.912] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 81 lines ...
W1207 00:40:46.872] I1207 00:40:45.491091   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-69c96fd869", UID:"bbcc4fe0-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-b95t7
W1207 00:40:46.872] I1207 00:40:45.821108   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources", UID:"bbcbcb0a-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W1207 00:40:46.872] I1207 00:40:45.824112   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-6c5996c457", UID:"bbff8e39-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-h5xp6
W1207 00:40:46.873] I1207 00:40:45.826779   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources", UID:"bbcbcb0a-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W1207 00:40:46.873] I1207 00:40:45.831481   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-69c96fd869", UID:"bbcc4fe0-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-n4qlz
W1207 00:40:46.873] I1207 00:40:45.831483   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources", UID:"bbcbcb0a-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 2
W1207 00:40:46.874] E1207 00:40:45.832350   55610 replica_set.go:450] Sync "namespace-1544143235-19905/nginx-deployment-resources-6c5996c457" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-resources-6c5996c457": the object has been modified; please apply your changes to the latest version and try again
W1207 00:40:46.874] I1207 00:40:45.835307   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-6c5996c457", UID:"bbff8e39-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1675", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-8zq9t
W1207 00:40:46.874] error: unable to find container named redis
W1207 00:40:46.874] I1207 00:40:46.168749   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources", UID:"bbcbcb0a-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1689", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 0
W1207 00:40:46.874] I1207 00:40:46.172640   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-69c96fd869", UID:"bbcc4fe0-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-b95t7
W1207 00:40:46.875] I1207 00:40:46.172685   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-69c96fd869", UID:"bbcc4fe0-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-ps98g
W1207 00:40:46.875] I1207 00:40:46.174668   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources", UID:"bbcbcb0a-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 2
W1207 00:40:46.875] I1207 00:40:46.177170   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-5f4579485f", UID:"bc33dc09-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-p8bfx
W1207 00:40:46.875] I1207 00:40:46.178858   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-5f4579485f", UID:"bc33dc09-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-cvqmz
W1207 00:40:46.876] I1207 00:40:46.430080   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources", UID:"bbcbcb0a-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1715", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-6c5996c457 to 0
W1207 00:40:46.876] I1207 00:40:46.434665   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-6c5996c457", UID:"bbff8e39-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1719", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6c5996c457-8zq9t
W1207 00:40:46.876] I1207 00:40:46.435405   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-6c5996c457", UID:"bbff8e39-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1719", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6c5996c457-h5xp6
W1207 00:40:46.876] I1207 00:40:46.436097   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources", UID:"bbcbcb0a-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1717", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 2
W1207 00:40:46.877] I1207 00:40:46.489102   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-ff8d89cb6", UID:"bc5bb559-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1725", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-w295k
W1207 00:40:46.877] I1207 00:40:46.639260   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143235-19905", Name:"nginx-deployment-resources-ff8d89cb6", UID:"bc5bb559-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1725", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-g69z9
W1207 00:40:46.877] error: you must specify resources by --filename when --local is set.
W1207 00:40:46.877] Example resource specifications include:
W1207 00:40:46.877]    '-f rsrc.yaml'
W1207 00:40:46.877]    '--filename=rsrc.json'
I1207 00:40:46.978] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1207 00:40:47.004] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1207 00:40:47.087] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I1207 00:40:48.457]                 pod-template-hash=55c9b846cc
I1207 00:40:48.457] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1207 00:40:48.457]                 deployment.kubernetes.io/max-replicas: 2
I1207 00:40:48.457]                 deployment.kubernetes.io/revision: 1
I1207 00:40:48.457] Controlled By:  Deployment/test-nginx-apps
I1207 00:40:48.457] Replicas:       1 current / 1 desired
I1207 00:40:48.457] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 00:40:48.457] Pod Template:
I1207 00:40:48.457]   Labels:  app=test-nginx-apps
I1207 00:40:48.457]            pod-template-hash=55c9b846cc
I1207 00:40:48.458]   Containers:
I1207 00:40:48.458]    nginx:
I1207 00:40:48.458]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 86 lines ...
W1207 00:40:51.959] I1207 00:40:51.374461   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-6f6bb85d9c", UID:"bf4e12c8-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1880", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-x7p7w
W1207 00:40:51.959] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W1207 00:40:51.959] I1207 00:40:51.858485   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx", UID:"bf4d7856-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1893", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 1
W1207 00:40:51.960] I1207 00:40:51.862607   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-9486b7cb7", UID:"bf98d1d5-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1894", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-lxnqb
W1207 00:40:51.960] I1207 00:40:51.864227   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx", UID:"bf4d7856-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1893", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-6f6bb85d9c to 2
W1207 00:40:51.960] I1207 00:40:51.869655   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-6f6bb85d9c", UID:"bf4e12c8-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1898", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-6f6bb85d9c-gvl99
W1207 00:40:51.960] E1207 00:40:51.881016   55610 replica_set.go:450] Sync "namespace-1544143247-27264/nginx-9486b7cb7" failed with Operation cannot be fulfilled on replicasets.apps "nginx-9486b7cb7": the object has been modified; please apply your changes to the latest version and try again
W1207 00:40:51.961] I1207 00:40:51.881711   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx", UID:"bf4d7856-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1895", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 2
W1207 00:40:51.961] I1207 00:40:51.887936   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-9486b7cb7", UID:"bf98d1d5-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1908", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-b22cj
I1207 00:40:52.061] apps.sh:293: Successful get deployment.extensions {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1207 00:40:52.062]     Image:	k8s.gcr.io/nginx:test-cmd
I1207 00:40:52.136] apps.sh:296: Successful get deployment.extensions {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1207 00:40:52.231] deployment.extensions/nginx rolled back
I1207 00:40:53.318] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 00:40:53.486] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 00:40:53.573] deployment.extensions/nginx rolled back
W1207 00:40:53.673] error: unable to find specified revision 1000000 in history
I1207 00:40:54.656] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1207 00:40:54.734] deployment.extensions/nginx paused
W1207 00:40:54.834] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I1207 00:40:54.935] deployment.extensions/nginx resumed
I1207 00:40:55.016] deployment.extensions/nginx rolled back
I1207 00:40:55.186]     deployment.kubernetes.io/revision-history: 1,3
W1207 00:40:55.365] error: desired revision (3) is different from the running revision (5)
I1207 00:40:55.503] deployment.extensions/nginx2 created
I1207 00:40:55.581] deployment.extensions "nginx2" deleted
I1207 00:40:55.654] deployment.extensions "nginx" deleted
I1207 00:40:55.755] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:40:55.884] deployment.extensions/nginx-deployment created
I1207 00:40:55.978] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 29 lines ...
W1207 00:40:58.081] I1207 00:40:56.233249   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c1feff7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1983", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W1207 00:40:58.081] I1207 00:40:56.235449   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-85db47bbdb", UID:"c2346181-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1984", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-kq5z2
W1207 00:40:58.082] I1207 00:40:56.239652   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c1feff7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1983", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1207 00:40:58.082] I1207 00:40:56.245553   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-646d4f779d", UID:"c1ff7f4f-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1990", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-bwkqv
W1207 00:40:58.082] I1207 00:40:56.245990   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c1feff7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1986", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 2
W1207 00:40:58.083] I1207 00:40:56.250961   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-85db47bbdb", UID:"c2346181-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-qj9vs
W1207 00:40:58.083] error: unable to find container named "redis"
W1207 00:40:58.083] I1207 00:40:57.282842   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c1feff7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2017", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1207 00:40:58.083] I1207 00:40:57.288058   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c1feff7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2019", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 2
W1207 00:40:58.084] I1207 00:40:57.288173   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-646d4f779d", UID:"c1ff7f4f-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2021", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-mc7f7
W1207 00:40:58.084] I1207 00:40:57.288486   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-646d4f779d", UID:"c1ff7f4f-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2021", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-bb587
W1207 00:40:58.084] I1207 00:40:57.290510   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-dc756cc6", UID:"c2d3c06f-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-7fdpr
W1207 00:40:58.084] I1207 00:40:57.292682   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-dc756cc6", UID:"c2d3c06f-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-bqs5m
... skipping 54 lines ...
I1207 00:41:01.296] Namespace:    namespace-1544143259-18896
I1207 00:41:01.296] Selector:     app=guestbook,tier=frontend
I1207 00:41:01.296] Labels:       app=guestbook
I1207 00:41:01.296]               tier=frontend
I1207 00:41:01.296] Annotations:  <none>
I1207 00:41:01.296] Replicas:     3 current / 3 desired
I1207 00:41:01.297] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:01.297] Pod Template:
I1207 00:41:01.297]   Labels:  app=guestbook
I1207 00:41:01.297]            tier=frontend
I1207 00:41:01.297]   Containers:
I1207 00:41:01.297]    php-redis:
I1207 00:41:01.297]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 00:41:01.396] Namespace:    namespace-1544143259-18896
I1207 00:41:01.396] Selector:     app=guestbook,tier=frontend
I1207 00:41:01.396] Labels:       app=guestbook
I1207 00:41:01.396]               tier=frontend
I1207 00:41:01.396] Annotations:  <none>
I1207 00:41:01.396] Replicas:     3 current / 3 desired
I1207 00:41:01.396] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:01.397] Pod Template:
I1207 00:41:01.397]   Labels:  app=guestbook
I1207 00:41:01.397]            tier=frontend
I1207 00:41:01.397]   Containers:
I1207 00:41:01.397]    php-redis:
I1207 00:41:01.397]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1207 00:41:01.489] Namespace:    namespace-1544143259-18896
I1207 00:41:01.489] Selector:     app=guestbook,tier=frontend
I1207 00:41:01.489] Labels:       app=guestbook
I1207 00:41:01.489]               tier=frontend
I1207 00:41:01.489] Annotations:  <none>
I1207 00:41:01.489] Replicas:     3 current / 3 desired
I1207 00:41:01.489] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:01.490] Pod Template:
I1207 00:41:01.490]   Labels:  app=guestbook
I1207 00:41:01.490]            tier=frontend
I1207 00:41:01.490]   Containers:
I1207 00:41:01.490]    php-redis:
I1207 00:41:01.490]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I1207 00:41:01.589] Namespace:    namespace-1544143259-18896
I1207 00:41:01.589] Selector:     app=guestbook,tier=frontend
I1207 00:41:01.589] Labels:       app=guestbook
I1207 00:41:01.589]               tier=frontend
I1207 00:41:01.589] Annotations:  <none>
I1207 00:41:01.589] Replicas:     3 current / 3 desired
I1207 00:41:01.589] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:01.590] Pod Template:
I1207 00:41:01.590]   Labels:  app=guestbook
I1207 00:41:01.590]            tier=frontend
I1207 00:41:01.590]   Containers:
I1207 00:41:01.590]    php-redis:
I1207 00:41:01.590]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 15 lines ...
I1207 00:41:01.591]
W1207 00:41:01.692] I1207 00:40:58.617385   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c33e8f7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2071", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 1
W1207 00:41:01.692] I1207 00:40:58.620469   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-5b795689cd", UID:"c3a02589-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-trhqn
W1207 00:41:01.692] I1207 00:40:58.623414   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c33e8f7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2071", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1207 00:41:01.693] I1207 00:40:58.626831   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-646d4f779d", UID:"c33f1ec8-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-56d9l
W1207 00:41:01.693] I1207 00:40:58.628797   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c33e8f7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2074", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 2
W1207 00:41:01.693] E1207 00:40:58.630146   55610 replica_set.go:450] Sync "namespace-1544143247-27264/nginx-deployment-5b795689cd" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5b795689cd": the object has been modified; please apply your changes to the latest version and try again
W1207 00:41:01.693] I1207 00:40:58.632375   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-5b795689cd", UID:"c3a02589-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2083", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-5z8d6
W1207 00:41:01.694] I1207 00:40:58.875942   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c33e8f7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2095", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1207 00:41:01.694] I1207 00:40:58.880459   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-646d4f779d", UID:"c33f1ec8-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2099", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-5hsxk
W1207 00:41:01.694] I1207 00:40:58.880499   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-646d4f779d", UID:"c33f1ec8-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2099", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-4thvr
W1207 00:41:01.694] I1207 00:40:58.882925   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c33e8f7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2097", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5766b7c95b to 2
W1207 00:41:01.695] I1207 00:40:58.885988   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-5766b7c95b", UID:"c3c6b718-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5766b7c95b-wgk6t
... skipping 3 lines ...
W1207 00:41:01.696] I1207 00:40:59.136005   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c33e8f7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2126", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5b795689cd to 0
W1207 00:41:01.696] I1207 00:40:59.140841   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c33e8f7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2128", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-65b869c68c to 2
W1207 00:41:01.696] I1207 00:40:59.317661   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c33e8f7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2136", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-65b869c68c to 0
W1207 00:41:01.696] I1207 00:40:59.412288   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-5b795689cd", UID:"c3a02589-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2129", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-trhqn
W1207 00:41:01.697] I1207 00:40:59.462310   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment-5b795689cd", UID:"c3a02589-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2129", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-5z8d6
W1207 00:41:01.697] I1207 00:40:59.469918   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143247-27264", Name:"nginx-deployment", UID:"c33e8f7d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2139", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7b8f7659b7 to 2
W1207 00:41:01.697] E1207 00:40:59.508999   55610 replica_set.go:450] Sync "namespace-1544143247-27264/nginx-deployment-794dcdf6bb" failed with replicasets.apps "nginx-deployment-794dcdf6bb" not found
W1207 00:41:01.697] E1207 00:40:59.559072   55610 replica_set.go:450] Sync "namespace-1544143247-27264/nginx-deployment-646d4f779d" failed with replicasets.apps "nginx-deployment-646d4f779d" not found
W1207 00:41:01.697] I1207 00:40:59.565898   55610 horizontal.go:309] Horizontal Pod Autoscaler frontend has been deleted in namespace-1544143235-19905
W1207 00:41:01.698] E1207 00:40:59.609529   55610 replica_set.go:450] Sync "namespace-1544143247-27264/nginx-deployment-5766b7c95b" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5766b7c95b": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1544143247-27264/nginx-deployment-5766b7c95b, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c3c6b718-f9b8-11e8-843b-0242ac110002, UID in object meta: 
W1207 00:41:01.698] E1207 00:40:59.659061   55610 replica_set.go:450] Sync "namespace-1544143247-27264/nginx-deployment-65b869c68c" failed with replicasets.apps "nginx-deployment-65b869c68c" not found
W1207 00:41:01.698] E1207 00:40:59.858929   55610 replica_set.go:450] Sync "namespace-1544143247-27264/nginx-deployment-5b795689cd" failed with replicasets.apps "nginx-deployment-5b795689cd" not found
W1207 00:41:01.698] I1207 00:41:00.045779   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c47968f3-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2167", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-g6c5r
W1207 00:41:01.698] I1207 00:41:00.047808   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c47968f3-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2167", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8cknm
W1207 00:41:01.699] I1207 00:41:00.059230   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c47968f3-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2167", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-m24x8
W1207 00:41:01.699] E1207 00:41:00.258817   55610 replica_set.go:450] Sync "namespace-1544143259-18896/frontend" failed with replicasets.apps "frontend" not found
W1207 00:41:01.699] I1207 00:41:00.396430   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend-no-cascade", UID:"c4af3368-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-xs8cw
W1207 00:41:01.699] I1207 00:41:00.398505   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend-no-cascade", UID:"c4af3368-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-r28wv
W1207 00:41:01.699] I1207 00:41:00.460160   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend-no-cascade", UID:"c4af3368-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-q58qk
W1207 00:41:01.700] E1207 00:41:00.658843   55610 replica_set.go:450] Sync "namespace-1544143259-18896/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W1207 00:41:01.700] I1207 00:41:01.082112   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c517c946-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2200", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-x6fhq
W1207 00:41:01.700] I1207 00:41:01.084338   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c517c946-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2200", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5px7p
W1207 00:41:01.700] I1207 00:41:01.084372   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c517c946-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2200", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hd6kz
I1207 00:41:01.801] Successful describe rs:
I1207 00:41:01.801] Name:         frontend
I1207 00:41:01.801] Namespace:    namespace-1544143259-18896
I1207 00:41:01.801] Selector:     app=guestbook,tier=frontend
I1207 00:41:01.801] Labels:       app=guestbook
I1207 00:41:01.801]               tier=frontend
I1207 00:41:01.801] Annotations:  <none>
I1207 00:41:01.801] Replicas:     3 current / 3 desired
I1207 00:41:01.801] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:01.802] Pod Template:
I1207 00:41:01.802]   Labels:  app=guestbook
I1207 00:41:01.802]            tier=frontend
I1207 00:41:01.802]   Containers:
I1207 00:41:01.802]    php-redis:
I1207 00:41:01.802]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 00:41:01.816] Namespace:    namespace-1544143259-18896
I1207 00:41:01.816] Selector:     app=guestbook,tier=frontend
I1207 00:41:01.816] Labels:       app=guestbook
I1207 00:41:01.816]               tier=frontend
I1207 00:41:01.816] Annotations:  <none>
I1207 00:41:01.816] Replicas:     3 current / 3 desired
I1207 00:41:01.816] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:01.817] Pod Template:
I1207 00:41:01.817]   Labels:  app=guestbook
I1207 00:41:01.817]            tier=frontend
I1207 00:41:01.817]   Containers:
I1207 00:41:01.817]    php-redis:
I1207 00:41:01.817]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 00:41:01.911] Namespace:    namespace-1544143259-18896
I1207 00:41:01.911] Selector:     app=guestbook,tier=frontend
I1207 00:41:01.911] Labels:       app=guestbook
I1207 00:41:01.911]               tier=frontend
I1207 00:41:01.911] Annotations:  <none>
I1207 00:41:01.911] Replicas:     3 current / 3 desired
I1207 00:41:01.911] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:01.911] Pod Template:
I1207 00:41:01.912]   Labels:  app=guestbook
I1207 00:41:01.912]            tier=frontend
I1207 00:41:01.912]   Containers:
I1207 00:41:01.912]    php-redis:
I1207 00:41:01.912]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I1207 00:41:02.012] Namespace:    namespace-1544143259-18896
I1207 00:41:02.012] Selector:     app=guestbook,tier=frontend
I1207 00:41:02.012] Labels:       app=guestbook
I1207 00:41:02.012]               tier=frontend
I1207 00:41:02.012] Annotations:  <none>
I1207 00:41:02.012] Replicas:     3 current / 3 desired
I1207 00:41:02.012] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:02.012] Pod Template:
I1207 00:41:02.012]   Labels:  app=guestbook
I1207 00:41:02.012]            tier=frontend
I1207 00:41:02.013]   Containers:
I1207 00:41:02.013]    php-redis:
I1207 00:41:02.013]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 137 lines ...
W1207 00:41:04.004] I1207 00:41:03.486236   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"scale-1-9bdb56f49", UID:"c5f36147-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2268", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-9bdb56f49-67qxz
W1207 00:41:04.004] I1207 00:41:03.489041   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143259-18896", Name:"scale-2", UID:"c608259c-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2272", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-9bdb56f49 to 3
W1207 00:41:04.005] I1207 00:41:03.491810   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"scale-2-9bdb56f49", UID:"c608abfa-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2276", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-9bdb56f49-bjtj6
W1207 00:41:04.005] I1207 00:41:03.500983   55610 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544143259-18896", Name:"scale-3", UID:"c61e2c51-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2285", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-9bdb56f49 to 3
W1207 00:41:04.005] I1207 00:41:03.609909   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"scale-3-9bdb56f49", UID:"c61eaf13-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2286", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-9bdb56f49-th2m5
W1207 00:41:04.005] I1207 00:41:03.810038   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"scale-3-9bdb56f49", UID:"c61eaf13-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2286", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-9bdb56f49-qkm8v
W1207 00:41:04.059] E1207 00:41:04.058934   55610 replica_set.go:450] Sync "namespace-1544143259-18896/scale-3-9bdb56f49" failed with replicasets.apps "scale-3-9bdb56f49" not found
W1207 00:41:04.109] E1207 00:41:04.108866   55610 replica_set.go:450] Sync "namespace-1544143259-18896/scale-2-9bdb56f49" failed with replicasets.apps "scale-2-9bdb56f49" not found
W1207 00:41:04.161] I1207 00:41:04.160631   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c6dc79a8-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6k7fw
W1207 00:41:04.210] I1207 00:41:04.209447   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c6dc79a8-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xc4nf
W1207 00:41:04.310] I1207 00:41:04.310107   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c6dc79a8-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gv9q9
I1207 00:41:04.411] replicaset.extensions/frontend created
I1207 00:41:04.411] apps.sh:587: Successful get rs frontend {{.spec.replicas}}: 3
I1207 00:41:04.411] (Bservice/frontend exposed
... skipping 35 lines ...
I1207 00:41:06.756] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1207 00:41:06.828] horizontalpodautoscaler.autoscaling "frontend" deleted
W1207 00:41:06.929] I1207 00:41:05.893986   55610 horizontal.go:309] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1544143247-27264
W1207 00:41:06.930] I1207 00:41:06.270955   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c82f7c6d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2388", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kmcn2
W1207 00:41:06.930] I1207 00:41:06.273507   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c82f7c6d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2388", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gbfrx
W1207 00:41:06.930] I1207 00:41:06.273549   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544143259-18896", Name:"frontend", UID:"c82f7c6d-f9b8-11e8-843b-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2388", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ndshh
W1207 00:41:06.931] Error: required flag(s) "max" not set
W1207 00:41:06.931] 
W1207 00:41:06.931] 
W1207 00:41:06.931] Examples:
W1207 00:41:06.931]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1207 00:41:06.931]   kubectl autoscale deployment foo --min=2 --max=10
W1207 00:41:06.931]   
... skipping 85 lines ...
I1207 00:41:09.601] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 00:41:09.680] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 00:41:09.774] statefulset.apps/nginx rolled back
I1207 00:41:09.857] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1207 00:41:09.935] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 00:41:10.027] Successful
I1207 00:41:10.027] message:error: unable to find specified revision 1000000 in history
I1207 00:41:10.028] has:unable to find specified revision
I1207 00:41:10.107] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1207 00:41:10.188] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 00:41:10.279] (Bstatefulset.apps/nginx rolled back
I1207 00:41:10.364] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I1207 00:41:10.443] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 61 lines ...
I1207 00:41:12.084] Name:         mock
I1207 00:41:12.084] Namespace:    namespace-1544143271-10635
I1207 00:41:12.085] Selector:     app=mock
I1207 00:41:12.085] Labels:       app=mock
I1207 00:41:12.085] Annotations:  <none>
I1207 00:41:12.085] Replicas:     1 current / 1 desired
I1207 00:41:12.085] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:12.085] Pod Template:
I1207 00:41:12.085]   Labels:  app=mock
I1207 00:41:12.085]   Containers:
I1207 00:41:12.085]    mock-container:
I1207 00:41:12.085]     Image:        k8s.gcr.io/pause:2.0
I1207 00:41:12.085]     Port:         9949/TCP
... skipping 56 lines ...
I1207 00:41:14.079] Name:         mock
I1207 00:41:14.079] Namespace:    namespace-1544143271-10635
I1207 00:41:14.079] Selector:     app=mock
I1207 00:41:14.079] Labels:       app=mock
I1207 00:41:14.079] Annotations:  <none>
I1207 00:41:14.079] Replicas:     1 current / 1 desired
I1207 00:41:14.079] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:14.079] Pod Template:
I1207 00:41:14.079]   Labels:  app=mock
I1207 00:41:14.080]   Containers:
I1207 00:41:14.080]    mock-container:
I1207 00:41:14.080]     Image:        k8s.gcr.io/pause:2.0
I1207 00:41:14.080]     Port:         9949/TCP
... skipping 56 lines ...
I1207 00:41:16.007] Name:         mock
I1207 00:41:16.007] Namespace:    namespace-1544143271-10635
I1207 00:41:16.007] Selector:     app=mock
I1207 00:41:16.007] Labels:       app=mock
I1207 00:41:16.008] Annotations:  <none>
I1207 00:41:16.008] Replicas:     1 current / 1 desired
I1207 00:41:16.008] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:16.008] Pod Template:
I1207 00:41:16.008]   Labels:  app=mock
I1207 00:41:16.008]   Containers:
I1207 00:41:16.008]    mock-container:
I1207 00:41:16.008]     Image:        k8s.gcr.io/pause:2.0
I1207 00:41:16.008]     Port:         9949/TCP
... skipping 42 lines ...
I1207 00:41:17.912] Namespace:    namespace-1544143271-10635
I1207 00:41:17.912] Selector:     app=mock
I1207 00:41:17.912] Labels:       app=mock
I1207 00:41:17.912]               status=replaced
I1207 00:41:17.913] Annotations:  <none>
I1207 00:41:17.913] Replicas:     1 current / 1 desired
I1207 00:41:17.913] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:17.913] Pod Template:
I1207 00:41:17.913]   Labels:  app=mock
I1207 00:41:17.913]   Containers:
I1207 00:41:17.913]    mock-container:
I1207 00:41:17.913]     Image:        k8s.gcr.io/pause:2.0
I1207 00:41:17.913]     Port:         9949/TCP
... skipping 11 lines ...
I1207 00:41:17.915] Namespace:    namespace-1544143271-10635
I1207 00:41:17.915] Selector:     app=mock2
I1207 00:41:17.915] Labels:       app=mock2
I1207 00:41:17.915]               status=replaced
I1207 00:41:17.915] Annotations:  <none>
I1207 00:41:17.915] Replicas:     1 current / 1 desired
I1207 00:41:17.915] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 00:41:17.915] Pod Template:
I1207 00:41:17.915]   Labels:  app=mock2
I1207 00:41:17.915]   Containers:
I1207 00:41:17.916]    mock-container:
I1207 00:41:17.916]     Image:        k8s.gcr.io/pause:2.0
I1207 00:41:17.916]     Port:         9949/TCP
... skipping 105 lines ...
I1207 00:41:22.355] Context "test" modified.
I1207 00:41:22.361] +++ [1207 00:41:22] Testing persistent volumes
I1207 00:41:22.445] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:41:22.582] persistentvolume/pv0001 created
W1207 00:41:22.682] I1207 00:41:21.424715   55610 horizontal.go:309] Horizontal Pod Autoscaler frontend has been deleted in namespace-1544143259-18896
W1207 00:41:22.683] I1207 00:41:21.556892   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143271-10635", Name:"mock", UID:"d14c2aea-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"2655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-27bvz
W1207 00:41:22.683] E1207 00:41:22.594016   55610 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I1207 00:41:22.783] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I1207 00:41:22.784] persistentvolume "pv0001" deleted
I1207 00:41:22.910] persistentvolume/pv0002 created
I1207 00:41:23.000] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I1207 00:41:23.074] persistentvolume "pv0002" deleted
W1207 00:41:23.175] E1207 00:41:22.912735   55610 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
W1207 00:41:23.221] E1207 00:41:23.220727   55610 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I1207 00:41:23.321] persistentvolume/pv0003 created
I1207 00:41:23.322] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I1207 00:41:23.384] persistentvolume "pv0003" deleted
I1207 00:41:23.473] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 00:41:23.486] +++ exit code: 0
I1207 00:41:23.528] Recording: run_persistent_volume_claims_tests
... skipping 475 lines ...
I1207 00:41:27.586] yes
I1207 00:41:27.586] has:the server doesn't have a resource type
I1207 00:41:27.657] Successful
I1207 00:41:27.657] message:yes
I1207 00:41:27.657] has:yes
I1207 00:41:27.725] Successful
I1207 00:41:27.726] message:error: --subresource can not be used with NonResourceURL
I1207 00:41:27.726] has:subresource can not be used with NonResourceURL
I1207 00:41:27.798] Successful
I1207 00:41:27.872] Successful
I1207 00:41:27.873] message:yes
I1207 00:41:27.873] 0
I1207 00:41:27.873] has:0
... skipping 6 lines ...
I1207 00:41:28.039] role.rbac.authorization.k8s.io/testing-R reconciled
I1207 00:41:28.126] legacy-script.sh:736: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I1207 00:41:28.211] legacy-script.sh:737: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I1207 00:41:28.296] legacy-script.sh:738: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I1207 00:41:28.379] legacy-script.sh:739: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I1207 00:41:28.450] Successful
I1207 00:41:28.450] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I1207 00:41:28.450] has:only rbac.authorization.k8s.io/v1 is supported
I1207 00:41:28.531] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I1207 00:41:28.536] role.rbac.authorization.k8s.io "testing-R" deleted
I1207 00:41:28.544] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I1207 00:41:28.550] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I1207 00:41:28.559] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I1207 00:41:29.552] +++ Running case: test-cmd.run_kubectl_explain_tests 
I1207 00:41:29.555] +++ working dir: /go/src/k8s.io/kubernetes
I1207 00:41:29.557] +++ command: run_kubectl_explain_tests
I1207 00:41:29.566] +++ [1207 00:41:29] Testing kubectl(v1:explain)
W1207 00:41:29.667] I1207 00:41:29.444531   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143288-11477", Name:"cassandra", UID:"d5cac578-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"2735", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-c7m8r
W1207 00:41:29.668] I1207 00:41:29.449916   55610 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544143288-11477", Name:"cassandra", UID:"d5cac578-f9b8-11e8-843b-0242ac110002", APIVersion:"v1", ResourceVersion:"2742", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-zk4js
W1207 00:41:29.668] E1207 00:41:29.454203   55610 replica_set.go:450] Sync "namespace-1544143288-11477/cassandra" failed with replicationcontrollers "cassandra" not found
I1207 00:41:29.768] KIND:     Pod
I1207 00:41:29.768] VERSION:  v1
I1207 00:41:29.769] 
I1207 00:41:29.769] DESCRIPTION:
I1207 00:41:29.769]      Pod is a collection of containers that can run on a host. This resource is
I1207 00:41:29.769]      created by clients and scheduled onto hosts.
... skipping 849 lines ...
I1207 00:41:53.761] message:node/127.0.0.1 already uncordoned (dry run)
I1207 00:41:53.761] has:already uncordoned
I1207 00:41:53.846] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I1207 00:41:53.919] node/127.0.0.1 labeled
I1207 00:41:54.015] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I1207 00:41:54.080] Successful
I1207 00:41:54.081] message:error: cannot specify both a node name and a --selector option
I1207 00:41:54.081] See 'kubectl drain -h' for help and examples
I1207 00:41:54.081] has:cannot specify both a node name
I1207 00:41:54.143] Successful
I1207 00:41:54.144] message:error: USAGE: cordon NODE [flags]
I1207 00:41:54.144] See 'kubectl cordon -h' for help and examples
I1207 00:41:54.144] has:error\: USAGE\: cordon NODE
I1207 00:41:54.214] node/127.0.0.1 already uncordoned
I1207 00:41:54.283] Successful
I1207 00:41:54.284] message:error: You must provide one or more resources by argument or filename.
I1207 00:41:54.284] Example resource specifications include:
I1207 00:41:54.284]    '-f rsrc.yaml'
I1207 00:41:54.284]    '--filename=rsrc.json'
I1207 00:41:54.284]    '<resource> <name>'
I1207 00:41:54.284]    '<resource>'
I1207 00:41:54.284] has:must provide one or more resources
... skipping 15 lines ...
I1207 00:41:54.696] Successful
I1207 00:41:54.696] message:The following kubectl-compatible plugins are available:
I1207 00:41:54.696] 
I1207 00:41:54.696] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I1207 00:41:54.696]   - warning: kubectl-version overwrites existing command: "kubectl version"
I1207 00:41:54.696] 
I1207 00:41:54.696] error: one plugin warning was found
I1207 00:41:54.697] has:kubectl-version overwrites existing command: "kubectl version"
I1207 00:41:54.764] Successful
I1207 00:41:54.764] message:The following kubectl-compatible plugins are available:
I1207 00:41:54.764] 
I1207 00:41:54.765] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 00:41:54.765] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I1207 00:41:54.765]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 00:41:54.765] 
I1207 00:41:54.765] error: one plugin warning was found
I1207 00:41:54.765] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I1207 00:41:54.829] Successful
I1207 00:41:54.833] message:The following kubectl-compatible plugins are available:
I1207 00:41:54.833] 
I1207 00:41:54.833] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 00:41:54.833] has:plugins are available
I1207 00:41:54.895] Successful
I1207 00:41:54.895] message:
I1207 00:41:54.896] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I1207 00:41:54.896] error: unable to find any kubectl plugins in your PATH
I1207 00:41:54.896] has:unable to find any kubectl plugins in your PATH
I1207 00:41:54.959] Successful
I1207 00:41:54.959] message:I am plugin foo
I1207 00:41:54.959] has:plugin foo
I1207 00:41:55.025] Successful
I1207 00:41:55.026] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.897+8a167afe7042db", GitCommit:"8a167afe7042dbbcde1c65b2cd0e20a8f5cdf423", GitTreeState:"clean", BuildDate:"2018-12-07T00:35:42Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I1207 00:41:55.102] 
I1207 00:41:55.104] +++ Running case: test-cmd.run_impersonation_tests 
I1207 00:41:55.106] +++ working dir: /go/src/k8s.io/kubernetes
I1207 00:41:55.109] +++ command: run_impersonation_tests
I1207 00:41:55.118] +++ [1207 00:41:55] Testing impersonation
I1207 00:41:55.179] Successful
I1207 00:41:55.179] message:error: requesting groups or user-extra for  without impersonating a user
I1207 00:41:55.179] has:without impersonating a user
I1207 00:41:55.317] certificatesigningrequest.certificates.k8s.io/foo created
I1207 00:41:55.404] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I1207 00:41:55.490] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I1207 00:41:55.562] certificatesigningrequest.certificates.k8s.io "foo" deleted
I1207 00:41:55.711] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 19 lines ...
W1207 00:41:56.171] I1207 00:41:56.168056   52249 crd_finalizer.go:254] Shutting down CRDFinalizer
W1207 00:41:56.172] I1207 00:41:56.168060   52249 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 00:41:56.172] I1207 00:41:56.168073   52249 crdregistration_controller.go:143] Shutting down crd-autoregister controller
W1207 00:41:56.173] W1207 00:41:56.168339   52249 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 00:41:56.175] I1207 00:41:56.168892   52249 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
... skipping 184 lines ...
W1207 00:41:56.199] E1207 00:41:56.173382   52249 controller.go:172] rpc error: code = Unavailable desc = transport is closing
W1207 00:41:56.199] W1207 00:41:56.173449   52249 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 00:41:56.209] /go/src/k8s.io/kubernetes/hack/lib/etcd.sh: line 94: 55610 Terminated              "${KUBE_OUTPUT_HOSTBIN}/kube-controller-manager" --port="${CTLRMGR_PORT}" --kube-api-content-type="${KUBE_TEST_API_TYPE-}" --master="127.0.0.1:${API_PORT}" 1>&2
W1207 00:41:56.215] + make test-integration
I1207 00:41:56.315] No resources found
I1207 00:41:56.316] pod "test-pod-1" force deleted
I1207 00:41:56.316] +++ [1207 00:41:56] TESTS PASSED
I1207 00:41:56.316] junit report dir: /workspace/artifacts
... skipping 11 lines ...
I1207 00:45:31.325] ok  	k8s.io/kubernetes/test/integration/apimachinery	168.696s
I1207 00:45:31.326] ok  	k8s.io/kubernetes/test/integration/apiserver	37.416s
I1207 00:45:31.327] [restful] 2018/12/07 00:44:21 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:43091/swaggerapi
I1207 00:45:31.327] [restful] 2018/12/07 00:44:21 log.go:33: [restful/swagger] https://127.0.0.1:43091/swaggerui/ is mapped to folder /swagger-ui/
I1207 00:45:31.327] [restful] 2018/12/07 00:44:23 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:43091/swaggerapi
I1207 00:45:31.328] [restful] 2018/12/07 00:44:23 log.go:33: [restful/swagger] https://127.0.0.1:43091/swaggerui/ is mapped to folder /swagger-ui/
I1207 00:45:31.328] FAIL	k8s.io/kubernetes/test/integration/auth	92.109s
I1207 00:45:31.328] [restful] 2018/12/07 00:43:15 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:45113/swaggerapi
I1207 00:45:31.328] [restful] 2018/12/07 00:43:15 log.go:33: [restful/swagger] https://127.0.0.1:45113/swaggerui/ is mapped to folder /swagger-ui/
I1207 00:45:31.329] [restful] 2018/12/07 00:43:18 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:45113/swaggerapi
I1207 00:45:31.329] [restful] 2018/12/07 00:43:18 log.go:33: [restful/swagger] https://127.0.0.1:45113/swaggerui/ is mapped to folder /swagger-ui/
I1207 00:45:31.329] [restful] 2018/12/07 00:43:25 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:42355/swaggerapi
I1207 00:45:31.329] [restful] 2018/12/07 00:43:25 log.go:33: [restful/swagger] https://127.0.0.1:42355/swaggerui/ is mapped to folder /swagger-ui/
... skipping 224 lines ...
I1207 00:55:01.688] [restful] 2018/12/07 00:47:22 log.go:33: [restful/swagger] https://127.0.0.1:45571/swaggerui/ is mapped to folder /swagger-ui/
I1207 00:55:01.688] ok  	k8s.io/kubernetes/test/integration/tls	12.923s
I1207 00:55:01.688] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.010s
I1207 00:55:01.688] ok  	k8s.io/kubernetes/test/integration/volume	90.778s
I1207 00:55:01.688] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	140.787s
I1207 00:55:03.073] +++ [1207 00:55:03] Saved JUnit XML test report to /workspace/artifacts/junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-004204.xml
I1207 00:55:03.076] Makefile:184: recipe for target 'test' failed
I1207 00:55:03.085] +++ [1207 00:55:03] Cleaning up etcd
W1207 00:55:03.186] make[1]: *** [test] Error 1
W1207 00:55:03.186] !!! [1207 00:55:03] Call tree:
W1207 00:55:03.186] !!! [1207 00:55:03]  1: hack/make-rules/test-integration.sh:105 runTests(...)
W1207 00:55:03.260] make: *** [test-integration] Error 1
I1207 00:55:03.361] +++ [1207 00:55:03] Integration test cleanup complete
I1207 00:55:03.361] Makefile:203: recipe for target 'test-integration' failed
W1207 00:55:04.402] Traceback (most recent call last):
W1207 00:55:04.402]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 167, in <module>
W1207 00:55:04.402]     main(ARGS.branch, ARGS.script, ARGS.force, ARGS.prow)
W1207 00:55:04.402]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 136, in main
W1207 00:55:04.402]     check(*cmd)
W1207 00:55:04.403]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W1207 00:55:04.403]     subprocess.check_call(cmd)
W1207 00:55:04.403]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
W1207 00:55:04.403]     raise CalledProcessError(retcode, cmd)
W1207 00:55:04.404] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181105-ceed87206', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E1207 00:55:04.408] Command failed
I1207 00:55:04.408] process 687 exited with code 1 after 24.5m
E1207 00:55:04.408] FAIL: pull-kubernetes-integration
I1207 00:55:04.409] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1207 00:55:04.899] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1207 00:55:04.954] process 124199 exited with code 0 after 0.0m
I1207 00:55:04.955] Call:  gcloud config get-value account
I1207 00:55:05.184] process 124212 exited with code 0 after 0.0m
I1207 00:55:05.185] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 00:55:05.185] Upload result and artifacts...
I1207 00:55:05.185] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/37800
I1207 00:55:05.185] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/37800/artifacts
W1207 00:55:06.870] CommandException: One or more URLs matched no objects.
E1207 00:55:07.040] Command failed
I1207 00:55:07.040] process 124225 exited with code 1 after 0.0m
W1207 00:55:07.040] Remote dir gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/37800/artifacts not exist yet
I1207 00:55:07.041] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/37800/artifacts
I1207 00:55:10.223] process 124370 exited with code 0 after 0.1m
W1207 00:55:10.224] metadata path /workspace/_artifacts/metadata.json does not exist
W1207 00:55:10.224] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...