Result: FAILURE
Tests: 1 failed / 606 succeeded
Started: 2019-01-11 15:49
Elapsed: 28m53s
Revision:
Builder: gke-prow-containerd-pool-99179761-zl0r
pod: 5c96e52e-15b8-11e9-ada6-0a580a6c0160
infra-commit: 2435ec28a
repo: k8s.io/kubernetes
repo-commit: 33a9c6e892f69e20be9527ba00bd33dfa5de221b
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/client TestAtomicPut 13s

go test -v k8s.io/kubernetes/test/integration/client -run TestAtomicPut$
I0111 16:07:03.159829  116902 crd_finalizer.go:254] Shutting down CRDFinalizer
I0111 16:07:03.159977  116902 crdregistration_controller.go:143] Shutting down crd-autoregister controller
I0111 16:07:03.159998  116902 naming_controller.go:295] Shutting down NamingConditionController
I0111 16:07:03.160015  116902 available_controller.go:328] Shutting down AvailableConditionController
I0111 16:07:03.160047  116902 establishing_controller.go:84] Shutting down EstablishingController
I0111 16:07:03.160065  116902 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
I0111 16:07:03.160081  116902 customresource_discovery_controller.go:214] Shutting down DiscoveryController
I0111 16:07:03.160097  116902 autoregister_controller.go:160] Shutting down autoregister controller
I0111 16:07:03.160429  116902 secure_serving.go:156] Stopped listening on 127.0.0.1:46791
I0111 16:07:03.160439  116902 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0111 16:07:03.160443  116902 controller.go:90] Shutting down OpenAPI AggregationController
E0111 16:07:03.162817  116902 controller.go:172] Get https://127.0.0.1:46791/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:46791: connect: connection refused
I0111 16:07:03.163685  116902 serving.go:311] Generated self-signed cert (/tmp/kubernetes-kube-apiserver973976647/apiserver.crt, /tmp/kubernetes-kube-apiserver973976647/apiserver.key)
I0111 16:07:03.163708  116902 server.go:562] external host was not specified, using 127.0.0.1
W0111 16:07:03.163719  116902 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0111 16:07:03.901544  116902 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
... skipping 10 more mutation detector warnings identical to the line above ...
I0111 16:07:03.904075  116902 plugins.go:158] Loaded 6 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0111 16:07:03.904093  116902 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0111 16:07:03.905533  116902 plugins.go:158] Loaded 6 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0111 16:07:03.905556  116902 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0111 16:07:03.907724  116902 clientconn.go:551] parsed scheme: ""
I0111 16:07:03.907751  116902 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 16:07:03.907809  116902 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 16:07:03.907875  116902 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 16:07:03.908530  116902 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:07:03.933065  116902 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 16:07:03.934906  116902 master.go:229] Using reconciler: lease
... skipping 380 lines of repeated clientconn.go / resolver_conn_wrapper.go / balancer_v1_wrapper.go connection setup (the block above repeats once per storage client dialing 127.0.0.1:2379) ...
[restful] 2019/01/11 16:07:04 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:37341/swaggerapi
[restful] 2019/01/11 16:07:04 log.go:33: [restful/swagger] https://127.0.0.1:37341/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2019/01/11 16:07:08 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:37341/swaggerapi
[restful] 2019/01/11 16:07:08 log.go:33: [restful/swagger] https://127.0.0.1:37341/swaggerui/ is mapped to folder /swagger-ui/
I0111 16:07:08.392543  116902 plugins.go:158] Loaded 6 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0111 16:07:08.392593  116902 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
W0111 16:07:08.394359  116902 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 16:07:08.394500  116902 clientconn.go:551] parsed scheme: ""
I0111 16:07:08.394519  116902 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 16:07:08.394569  116902 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 16:07:08.394670  116902 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 16:07:08.395645  116902 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 16:07:08.396132  116902 clientconn.go:551] parsed scheme: ""
I0111 16:07:08.396263  116902 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 16:07:08.400683  116902 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 16:07:08.400762  116902 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 16:07:08.401647  116902 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:07:08.403923  116902 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 16:07:15.372218  116902 secure_serving.go:116] Serving securely on 127.0.0.1:37341
I0111 16:07:15.372299  116902 autoregister_controller.go:136] Starting autoregister controller
I0111 16:07:15.372315  116902 cache.go:32] Waiting for caches to sync for autoregister controller
I0111 16:07:15.372347  116902 available_controller.go:316] Starting AvailableConditionController
I0111 16:07:15.372386  116902 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0111 16:07:15.372696  116902 crd_finalizer.go:242] Starting CRDFinalizer
I0111 16:07:15.373442  116902 controller.go:84] Starting OpenAPI AggregationController
I0111 16:07:15.373483  116902 crdregistration_controller.go:112] Starting crd-autoregister controller
I0111 16:07:15.373548  116902 controller_utils.go:1021] Waiting for caches to sync for crd-autoregister controller
I0111 16:07:15.372299  116902 apiservice_controller.go:90] Starting APIServiceRegistrationController
I0111 16:07:15.373683  116902 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0111 16:07:15.379061  116902 customresource_discovery_controller.go:203] Starting DiscoveryController
I0111 16:07:15.379154  116902 naming_controller.go:284] Starting NamingConditionController
I0111 16:07:15.379190  116902 establishing_controller.go:73] Starting EstablishingController
W0111 16:07:15.410848  116902 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0111 16:07:15.411958  116902 controller.go:155] Unable to perform initial Kubernetes service initialization: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
W0111 16:07:15.419535  116902 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0111 16:07:15.420376  116902 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 16:07:15.472511  116902 cache.go:39] Caches are synced for AvailableConditionController controller
I0111 16:07:15.472665  116902 cache.go:39] Caches are synced for autoregister controller
I0111 16:07:15.474671  116902 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0111 16:07:15.474708  116902 controller_utils.go:1028] Caches are synced for crd-autoregister controller
I0111 16:07:16.379439  116902 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 16:07:16.384764  116902 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 16:07:16.384795  116902 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
testserver.go:142: runtime-config=map[api/all:true]
testserver.go:143: Starting kube-apiserver on port 37341...
testserver.go:155: Waiting for /healthz to be ok...
client_test.go:187: Unexpected error putting atomicRC: 0-length response with status code: 200 and content type: 
    from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-160518.xml
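
The failing assertion is from the Put step of the test's read-modify-update loop: the apiserver returned HTTP 200 with an empty body where the client expected a serialized ReplicationController, which client-go reports as the "0-length response" error above. Below is a minimal sketch of the atomic-put pattern the test exercises (writers updating one ReplicationController and retrying on resourceVersion conflicts). The kubeconfig setup, namespace, and object name are illustrative, not the test's actual fixture; the typed-client signatures match the client-go vintage of this run (newer releases add a context argument).

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Illustrative setup: assumes a reachable cluster via ~/.kube/config;
	// the real test instead spins up a throwaway kube-apiserver
	// (the testserver.go lines above).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	rcs := kubernetes.NewForConfigOrDie(config).CoreV1().ReplicationControllers("default")

	// Each Update carries the resourceVersion returned by Get; the
	// apiserver rejects stale writes with 409 Conflict, and
	// RetryOnConflict re-reads and retries, so each writer's change
	// eventually lands exactly once.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		rc, err := rcs.Get("atomicrc", metav1.GetOptions{}) // "atomicrc" is a placeholder name
		if err != nil {
			return err
		}
		if rc.Labels == nil {
			rc.Labels = map[string]string{}
		}
		rc.Labels["updater-1"] = "done"
		_, err = rcs.Update(rc)
		return err
	})
	fmt.Println("atomic put:", err)
}

An empty 200 reply to the Update call here would fail the same way the test did, since client-go refuses to decode a zero-length response body.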

(606 passed tests and 4 skipped tests not shown.)

Error lines from build-log.txt

... skipping 10 lines ...
I0111 15:49:21.511] process 228 exited with code 0 after 0.0m
I0111 15:49:21.511] Call:  gcloud config get-value account
I0111 15:49:21.834] process 240 exited with code 0 after 0.0m
I0111 15:49:21.835] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 15:49:21.835] Call:  kubectl get -oyaml pods/5c96e52e-15b8-11e9-ada6-0a580a6c0160
W0111 15:49:21.920] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0111 15:49:21.922] Command failed
I0111 15:49:21.923] process 252 exited with code 1 after 0.0m
E0111 15:49:21.923] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/5c96e52e-15b8-11e9-ada6-0a580a6c0160']' returned non-zero exit status 1
I0111 15:49:21.923] Root: /workspace
I0111 15:49:21.923] cd to /workspace
I0111 15:49:21.923] Checkout: /workspace/k8s.io/kubernetes master to /workspace/k8s.io/kubernetes
I0111 15:49:21.924] Call:  git init k8s.io/kubernetes
... skipping 831 lines ...
W0111 16:00:08.235] I0111 16:00:08.235214   55966 controllermanager.go:516] Started "endpoint"
W0111 16:00:08.235] I0111 16:00:08.235300   55966 endpoints_controller.go:149] Starting endpoint controller
W0111 16:00:08.236] I0111 16:00:08.235323   55966 controller_utils.go:1021] Waiting for caches to sync for endpoint controller
W0111 16:00:08.236] I0111 16:00:08.236031   55966 controllermanager.go:516] Started "ttl"
W0111 16:00:08.237] I0111 16:00:08.236114   55966 ttl_controller.go:116] Starting TTL controller
W0111 16:00:08.237] I0111 16:00:08.236713   55966 controller_utils.go:1021] Waiting for caches to sync for TTL controller
W0111 16:00:08.237] E0111 16:00:08.236886   55966 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0111 16:00:08.237] W0111 16:00:08.237019   55966 controllermanager.go:508] Skipping "service"
W0111 16:00:08.237] I0111 16:00:08.237083   55966 core.go:169] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0111 16:00:08.237] W0111 16:00:08.237107   55966 controllermanager.go:508] Skipping "route"
W0111 16:00:08.238] I0111 16:00:08.237891   55966 controllermanager.go:516] Started "pvc-protection"
W0111 16:00:08.238] I0111 16:00:08.237952   55966 pvc_protection_controller.go:99] Starting PVC protection controller
W0111 16:00:08.238] I0111 16:00:08.238023   55966 controller_utils.go:1021] Waiting for caches to sync for PVC protection controller
... skipping 8 lines ...
W0111 16:00:08.357] I0111 16:00:08.356278   55966 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 16:00:08.357] I0111 16:00:08.356314   55966 graph_builder.go:308] GraphBuilder running
W0111 16:00:08.358] I0111 16:00:08.356960   55966 controllermanager.go:516] Started "replicaset"
W0111 16:00:08.358] I0111 16:00:08.357008   55966 replica_set.go:182] Starting replicaset controller
W0111 16:00:08.358] I0111 16:00:08.357163   55966 controller_utils.go:1021] Waiting for caches to sync for ReplicaSet controller
W0111 16:00:08.358] I0111 16:00:08.357428   55966 node_lifecycle_controller.go:77] Sending events to api server
W0111 16:00:08.358] E0111 16:00:08.357478   55966 core.go:159] failed to start cloud node lifecycle controller: no cloud provider provided
W0111 16:00:08.358] W0111 16:00:08.357494   55966 controllermanager.go:508] Skipping "cloudnodelifecycle"
W0111 16:00:08.359] I0111 16:00:08.358373   55966 controllermanager.go:516] Started "horizontalpodautoscaling"
W0111 16:00:08.359] I0111 16:00:08.358517   55966 horizontal.go:156] Starting HPA controller
W0111 16:00:08.359] I0111 16:00:08.358571   55966 controller_utils.go:1021] Waiting for caches to sync for HPA controller
W0111 16:00:08.359] I0111 16:00:08.359075   55966 controllermanager.go:516] Started "statefulset"
W0111 16:00:08.359] I0111 16:00:08.359257   55966 stateful_set.go:151] Starting stateful set controller
... skipping 23 lines ...
W0111 16:00:08.417] I0111 16:00:08.415014   55966 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
W0111 16:00:08.417] I0111 16:00:08.415252   55966 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
W0111 16:00:08.417] I0111 16:00:08.415396   55966 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
W0111 16:00:08.417] I0111 16:00:08.415445   55966 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W0111 16:00:08.417] I0111 16:00:08.415499   55966 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0111 16:00:08.417] I0111 16:00:08.415553   55966 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
W0111 16:00:08.418] E0111 16:00:08.415622   55966 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 16:00:08.418] I0111 16:00:08.415664   55966 controllermanager.go:516] Started "resourcequota"
W0111 16:00:08.418] I0111 16:00:08.416046   55966 controllermanager.go:516] Started "csrcleaner"
W0111 16:00:08.418] I0111 16:00:08.416184   55966 resource_quota_controller.go:276] Starting resource quota controller
W0111 16:00:08.418] I0111 16:00:08.416213   55966 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0111 16:00:08.418] I0111 16:00:08.416236   55966 resource_quota_monitor.go:301] QuotaMonitor running
W0111 16:00:08.418] I0111 16:00:08.416343   55966 cleaner.go:81] Starting CSR cleaner controller
... skipping 13 lines ...
W0111 16:00:08.537] I0111 16:00:08.537065   55966 controller_utils.go:1028] Caches are synced for TTL controller
W0111 16:00:08.548] I0111 16:00:08.548000   55966 controller_utils.go:1028] Caches are synced for namespace controller
W0111 16:00:08.549] I0111 16:00:08.548919   55966 controller_utils.go:1028] Caches are synced for service account controller
W0111 16:00:08.551] I0111 16:00:08.551264   52622 controller.go:606] quota admission added evaluator for: serviceaccounts
I0111 16:00:08.690] +++ [0111 16:00:08] On try 3, controller-manager: ok
W0111 16:00:08.820] I0111 16:00:08.819815   55966 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0111 16:00:08.834] E0111 16:00:08.833961   55966 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0111 16:00:08.919] I0111 16:00:08.919200   55966 controller_utils.go:1028] Caches are synced for disruption controller
W0111 16:00:08.920] I0111 16:00:08.919244   55966 disruption.go:294] Sending events to api server.
W0111 16:00:08.922] W0111 16:00:08.921598   55966 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0111 16:00:08.926] I0111 16:00:08.925756   55966 controller_utils.go:1028] Caches are synced for GC controller
W0111 16:00:08.926] I0111 16:00:08.926501   55966 controller_utils.go:1028] Caches are synced for daemon sets controller
W0111 16:00:08.927] I0111 16:00:08.926842   55966 controller_utils.go:1028] Caches are synced for job controller
W0111 16:00:08.928] I0111 16:00:08.928032   55966 controller_utils.go:1028] Caches are synced for taint controller
W0111 16:00:08.928] I0111 16:00:08.928206   55966 node_lifecycle_controller.go:1162] Initializing eviction metric for zone: 
W0111 16:00:08.929] I0111 16:00:08.928277   55966 node_lifecycle_controller.go:1012] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
... skipping 37 lines ...
I0111 16:00:09.592]   "goVersion": "go1.11.4",
I0111 16:00:09.592]   "compiler": "gc",
I0111 16:00:09.592]   "platform": "linux/amd64"
I0111 16:00:09.746] }+++ [0111 16:00:09] Testing kubectl version: check client only output matches expected output
W0111 16:00:09.852] I0111 16:00:09.851443   55966 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 16:00:09.952] I0111 16:00:09.951883   55966 controller_utils.go:1028] Caches are synced for garbage collector controller
W0111 16:00:09.964] E0111 16:00:09.963924   55966 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0111 16:00:10.066] Successful: the flag '--client' shows correct client info
I0111 16:00:10.066] Successful: the flag '--client' correctly has no server version info
I0111 16:00:10.066] +++ [0111 16:00:10] Testing kubectl version: verify json output
I0111 16:00:10.240] Successful: --output json has correct client info
I0111 16:00:10.246] Successful: --output json has correct server info
I0111 16:00:10.249] +++ [0111 16:00:10] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
... skipping 51 lines ...
I0111 16:00:13.464] +++ working dir: /go/src/k8s.io/kubernetes
I0111 16:00:13.466] +++ command: run_RESTMapper_evaluation_tests
I0111 16:00:13.476] +++ [0111 16:00:13] Creating namespace namespace-1547222413-950
I0111 16:00:13.546] namespace/namespace-1547222413-950 created
I0111 16:00:13.610] Context "test" modified.
I0111 16:00:13.615] +++ [0111 16:00:13] Testing RESTMapper
I0111 16:00:13.721] +++ [0111 16:00:13] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0111 16:00:13.736] +++ exit code: 0
I0111 16:00:13.848] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0111 16:00:13.848] bindings                                                                      true         Binding
I0111 16:00:13.849] componentstatuses                 cs                                          false        ComponentStatus
I0111 16:00:13.849] configmaps                        cm                                          true         ConfigMap
I0111 16:00:13.849] endpoints                         ep                                          true         Endpoints
... skipping 606 lines ...
I0111 16:00:33.160] poddisruptionbudget.policy/test-pdb-3 created
I0111 16:00:33.255] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0111 16:00:33.327] poddisruptionbudget.policy/test-pdb-4 created
I0111 16:00:33.421] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0111 16:00:33.587] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:00:33.780] pod/env-test-pod created
W0111 16:00:33.881] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0111 16:00:33.882] error: setting 'all' parameter but found a non empty selector. 
W0111 16:00:33.882] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 16:00:33.882] I0111 16:00:32.824295   52622 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0111 16:00:33.882] error: min-available and max-unavailable cannot be both specified
I0111 16:00:33.983] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0111 16:00:33.983] Name:               env-test-pod
I0111 16:00:33.983] Namespace:          test-kubectl-describe-pod
I0111 16:00:33.983] Priority:           0
I0111 16:00:33.983] PriorityClassName:  <none>
I0111 16:00:33.983] Node:               <none>
... skipping 145 lines ...
W0111 16:00:46.975] I0111 16:00:45.791908   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222440-30160", Name:"modified", UID:"0e316ab5-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-d9xgk
W0111 16:00:46.975] I0111 16:00:46.508819   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222440-30160", Name:"modified", UID:"0ea0bd24-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-5dtsz
I0111 16:00:47.156] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:00:47.311] pod/valid-pod created
I0111 16:00:47.418] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 16:00:47.598] Successful
I0111 16:00:47.598] message:Error from server: cannot restore map from string
I0111 16:00:47.598] has:cannot restore map from string
I0111 16:00:47.690] Successful
I0111 16:00:47.690] message:pod/valid-pod patched (no change)
I0111 16:00:47.690] has:patched (no change)
I0111 16:00:47.782] pod/valid-pod patched
I0111 16:00:47.880] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0111 16:00:48.441] pod/valid-pod patched
I0111 16:00:48.547] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0111 16:00:48.625] pod/valid-pod patched
I0111 16:00:48.720] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0111 16:00:48.876] pod/valid-pod patched
I0111 16:00:48.975] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 16:00:49.145] +++ [0111 16:00:49] "kubectl patch with resourceVersion 493" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0111 16:00:49.245] E0111 16:00:47.590023   52622 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0111 16:00:49.372] pod "valid-pod" deleted
I0111 16:00:49.383] pod/valid-pod replaced
I0111 16:00:49.476] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0111 16:00:49.628] Successful
I0111 16:00:49.628] message:error: --grace-period must have --force specified
I0111 16:00:49.628] has:\-\-grace-period must have \-\-force specified
I0111 16:00:49.794] Successful
I0111 16:00:49.795] message:error: --timeout must have --force specified
I0111 16:00:49.795] has:\-\-timeout must have \-\-force specified
I0111 16:00:49.957] node/node-v1-test created
W0111 16:00:50.058] W0111 16:00:49.957200   55966 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0111 16:00:50.158] node/node-v1-test replaced
I0111 16:00:50.216] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0111 16:00:50.302] node "node-v1-test" deleted
I0111 16:00:50.403] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 16:00:50.683] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0111 16:00:52.887] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 57 lines ...
I0111 16:00:56.736] +++ [0111 16:00:56] Testing kubectl --save-config
I0111 16:00:56.742] +++ [0111 16:00:56] Creating namespace namespace-1547222456-19665
I0111 16:00:56.812] namespace/namespace-1547222456-19665 created
I0111 16:00:56.877] Context "test" modified.
I0111 16:00:56.969] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:00:57.129] pod/test-pod created
W0111 16:00:57.229] error: 'name' already has a value (valid-pod), and --overwrite is false
W0111 16:00:57.230] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 16:00:57.230] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 16:00:57.330] pod "test-pod" deleted
I0111 16:00:57.330] +++ [0111 16:00:57] Creating namespace namespace-1547222457-14831
I0111 16:00:57.380] namespace/namespace-1547222457-14831 created
I0111 16:00:57.445] Context "test" modified.
... skipping 41 lines ...
I0111 16:01:00.490] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0111 16:01:00.495] +++ working dir: /go/src/k8s.io/kubernetes
I0111 16:01:00.498] +++ command: run_kubectl_create_error_tests
I0111 16:01:00.509] +++ [0111 16:01:00] Creating namespace namespace-1547222460-13227
I0111 16:01:00.578] namespace/namespace-1547222460-13227 created
I0111 16:01:00.654] Context "test" modified.
I0111 16:01:00.661] +++ [0111 16:01:00] Testing kubectl create with error
W0111 16:01:00.761] Error: required flag(s) "filename" not set
W0111 16:01:00.761] 
W0111 16:01:00.762] 
W0111 16:01:00.762] Examples:
W0111 16:01:00.762]   # Create a pod using the data in pod.json.
W0111 16:01:00.762]   kubectl create -f ./pod.json
W0111 16:01:00.762]   
... skipping 38 lines ...
W0111 16:01:00.768]   kubectl create -f FILENAME [options]
W0111 16:01:00.768] 
W0111 16:01:00.769] Use "kubectl <command> --help" for more information about a given command.
W0111 16:01:00.769] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0111 16:01:00.769] 
W0111 16:01:00.769] required flag(s) "filename" not set
I0111 16:01:00.890] +++ [0111 16:01:00] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0111 16:01:00.990] kubectl convert is DEPRECATED and will be removed in a future version.
W0111 16:01:00.991] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 16:01:01.091] +++ exit code: 0
I0111 16:01:01.111] Recording: run_kubectl_apply_tests
I0111 16:01:01.111] Running command: run_kubectl_apply_tests
I0111 16:01:01.129] 
... skipping 13 lines ...
I0111 16:01:02.209] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I0111 16:01:03.143] deployment.extensions "test-deployment-retainkeys" deleted
I0111 16:01:03.229] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:03.390] pod/selector-test-pod created
I0111 16:01:03.480] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 16:01:03.560] Successful
I0111 16:01:03.561] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 16:01:03.561] has:pods "selector-test-pod-dont-apply" not found
I0111 16:01:03.633] pod "selector-test-pod" deleted
I0111 16:01:03.719] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:03.930] pod/test-pod created (server dry run)
I0111 16:01:04.022] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:04.196] pod/test-pod created
... skipping 12 lines ...
W0111 16:01:05.110] I0111 16:01:05.109734   52622 clientconn.go:551] parsed scheme: ""
W0111 16:01:05.110] I0111 16:01:05.109768   52622 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 16:01:05.111] I0111 16:01:05.109821   52622 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 16:01:05.111] I0111 16:01:05.109871   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:01:05.111] I0111 16:01:05.111399   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:01:05.116] I0111 16:01:05.115983   52622 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0111 16:01:05.207] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0111 16:01:05.307] kind.mygroup.example.com/myobj created (server dry run)
I0111 16:01:05.308] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0111 16:01:05.381] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:05.561] pod/a created
I0111 16:01:06.859] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0111 16:01:06.955] Successful
I0111 16:01:06.955] message:Error from server (NotFound): pods "b" not found
I0111 16:01:06.955] has:pods "b" not found
I0111 16:01:07.126] pod/b created
I0111 16:01:07.138] pod/a pruned
I0111 16:01:08.627] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0111 16:01:08.715] Successful
I0111 16:01:08.715] message:Error from server (NotFound): pods "a" not found
I0111 16:01:08.716] has:pods "a" not found
I0111 16:01:08.791] pod "b" deleted
I0111 16:01:08.880] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:09.027] pod/a created
I0111 16:01:09.111] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0111 16:01:09.197] Successful
I0111 16:01:09.198] message:Error from server (NotFound): pods "b" not found
I0111 16:01:09.198] has:pods "b" not found
I0111 16:01:09.351] pod/b created
I0111 16:01:09.441] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0111 16:01:09.527] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I0111 16:01:09.612] (Bpod "a" deleted
I0111 16:01:09.616] pod "b" deleted
I0111 16:01:09.780] Successful
I0111 16:01:09.780] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0111 16:01:09.781] has:all resources selected for prune without explicitly passing --all
I0111 16:01:09.946] pod/a created
I0111 16:01:09.953] pod/b created
I0111 16:01:09.967] service/prune-svc created
I0111 16:01:11.272] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0111 16:01:11.360] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 129 lines ...
I0111 16:01:23.418] Context "test" modified.
I0111 16:01:23.424] +++ [0111 16:01:23] Testing kubectl create filter
I0111 16:01:23.520] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:23.698] pod/selector-test-pod created
I0111 16:01:23.799] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 16:01:23.880] Successful
I0111 16:01:23.880] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 16:01:23.880] has:pods "selector-test-pod-dont-apply" not found
I0111 16:01:23.961] pod "selector-test-pod" deleted
I0111 16:01:23.980] +++ exit code: 0
I0111 16:01:24.011] Recording: run_kubectl_apply_deployments_tests
I0111 16:01:24.011] Running command: run_kubectl_apply_deployments_tests
I0111 16:01:24.031] 
... skipping 28 lines ...
I0111 16:01:26.000] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:26.085] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:26.163] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:26.340] deployment.extensions/nginx created
I0111 16:01:26.440] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0111 16:01:30.667] Successful
I0111 16:01:30.667] message:Error from server (Conflict): error when applying patch:
I0111 16:01:30.668] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547222484-26408\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0111 16:01:30.668] to:
I0111 16:01:30.668] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0111 16:01:30.668] Name: "nginx", Namespace: "namespace-1547222484-26408"
I0111 16:01:30.670] Object: &{map["status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["type":"Available" "status":"False" "lastUpdateTime":"2019-01-11T16:01:26Z" "lastTransitionTime":"2019-01-11T16:01:26Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability."]]] "kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["name":"nginx" "resourceVersion":"710" "creationTimestamp":"2019-01-11T16:01:26Z" "namespace":"namespace-1547222484-26408" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1547222484-26408/deployments/nginx" "uid":"265f1399-15ba-11e9-8e70-0242ac110002" "generation":'\x01' "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547222484-26408\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"]] "spec":map["replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["spec":map["containers":[map["name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"] "metadata":map["labels":map["name":"nginx1"] "creationTimestamp":<nil>]] "strategy":map["rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01'] "type":"RollingUpdate"] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647)]]}
I0111 16:01:30.670] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0111 16:01:30.670] has:Error from server (Conflict)
W0111 16:01:30.771] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0111 16:01:30.771] I0111 16:01:21.929851   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222480-2832", Name:"nginx-extensions", UID:"23bd67ed-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-extensions-6fb4b564f5 to 1
W0111 16:01:30.772] I0111 16:01:21.932698   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222480-2832", Name:"nginx-extensions-6fb4b564f5", UID:"23be044a-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-extensions-6fb4b564f5-dfwps
W0111 16:01:30.772] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0111 16:01:30.772] I0111 16:01:22.301299   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222480-2832", Name:"nginx-apps", UID:"23f61083-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-apps-c6bb96759 to 1
W0111 16:01:30.773] I0111 16:01:22.306246   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222480-2832", Name:"nginx-apps-c6bb96759", UID:"23f6a195-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-apps-c6bb96759-hfpwm
... skipping 15 lines ...
I0111 16:01:35.966]           "name": "nginx2"
I0111 16:01:35.966] has:"name": "nginx2"
W0111 16:01:36.066] I0111 16:01:35.878164   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222484-26408", Name:"nginx", UID:"2c0dcee2-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"734", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
W0111 16:01:36.067] I0111 16:01:35.881733   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222484-26408", Name:"nginx-7777658b9d", UID:"2c0e678e-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"735", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-6h5mk
W0111 16:01:36.067] I0111 16:01:35.885092   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222484-26408", Name:"nginx-7777658b9d", UID:"2c0e678e-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"735", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-djj5s
W0111 16:01:36.067] I0111 16:01:35.885420   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222484-26408", Name:"nginx-7777658b9d", UID:"2c0e678e-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"735", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-zglxs
W0111 16:01:40.271] E0111 16:01:40.270422   55966 replica_set.go:450] Sync "namespace-1547222484-26408/nginx-7777658b9d" failed with Operation cannot be fulfilled on replicasets.apps "nginx-7777658b9d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547222484-26408/nginx-7777658b9d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 2c0e678e-15ba-11e9-8e70-0242ac110002, UID in object meta: 
W0111 16:01:41.204] I0111 16:01:41.203656   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222484-26408", Name:"nginx", UID:"2f3a758b-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"766", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
W0111 16:01:41.208] I0111 16:01:41.207445   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222484-26408", Name:"nginx-7777658b9d", UID:"2f3b0c06-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"767", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-d9w5r
W0111 16:01:41.211] I0111 16:01:41.210892   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222484-26408", Name:"nginx-7777658b9d", UID:"2f3b0c06-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"767", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-cb4rz
W0111 16:01:41.212] I0111 16:01:41.211230   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222484-26408", Name:"nginx-7777658b9d", UID:"2f3b0c06-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"767", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-c7n9g
I0111 16:01:41.312] Successful
I0111 16:01:41.314] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 132 lines ...
I0111 16:01:43.211] +++ [0111 16:01:43] Creating namespace namespace-1547222503-29176
I0111 16:01:43.277] namespace/namespace-1547222503-29176 created
I0111 16:01:43.342] Context "test" modified.
I0111 16:01:43.347] +++ [0111 16:01:43] Testing kubectl get
I0111 16:01:43.428] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:43.518] Successful
I0111 16:01:43.519] message:Error from server (NotFound): pods "abc" not found
I0111 16:01:43.519] has:pods "abc" not found
I0111 16:01:43.603] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:43.687] Successful
I0111 16:01:43.687] message:Error from server (NotFound): pods "abc" not found
I0111 16:01:43.687] has:pods "abc" not found
I0111 16:01:43.774] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:43.852] Successful
I0111 16:01:43.853] message:{
I0111 16:01:43.853]     "apiVersion": "v1",
I0111 16:01:43.853]     "items": [],
... skipping 23 lines ...
I0111 16:01:44.173] has not:No resources found
I0111 16:01:44.251] Successful
I0111 16:01:44.251] message:NAME
I0111 16:01:44.251] has not:No resources found
I0111 16:01:44.332] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:44.433] Successful
I0111 16:01:44.433] message:error: the server doesn't have a resource type "foobar"
I0111 16:01:44.434] has not:No resources found
I0111 16:01:44.510] Successful
I0111 16:01:44.510] message:No resources found.
I0111 16:01:44.511] has:No resources found
I0111 16:01:44.586] Successful
I0111 16:01:44.587] message:
I0111 16:01:44.587] has not:No resources found
I0111 16:01:44.663] Successful
I0111 16:01:44.663] message:No resources found.
I0111 16:01:44.663] has:No resources found
I0111 16:01:44.754] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:44.837] Successful
I0111 16:01:44.837] message:Error from server (NotFound): pods "abc" not found
I0111 16:01:44.837] has:pods "abc" not found
I0111 16:01:44.839] FAIL!
I0111 16:01:44.839] message:Error from server (NotFound): pods "abc" not found
I0111 16:01:44.839] has not:List
I0111 16:01:44.839] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0111 16:01:44.948] Successful
I0111 16:01:44.948] message:I0111 16:01:44.902016   68061 loader.go:359] Config loaded from file /tmp/tmp.9M47Rg7p7K/.kube/config
I0111 16:01:44.948] I0111 16:01:44.902490   68061 loader.go:359] Config loaded from file /tmp/tmp.9M47Rg7p7K/.kube/config
I0111 16:01:44.949] I0111 16:01:44.903624   68061 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
... skipping 995 lines ...
I0111 16:01:48.399] }
I0111 16:01:48.477] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 16:01:48.720] <no value>Successful
I0111 16:01:48.720] message:valid-pod:
I0111 16:01:48.720] has:valid-pod:
I0111 16:01:48.803] Successful
I0111 16:01:48.803] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0111 16:01:48.803] 	template was:
I0111 16:01:48.804] 		{.missing}
I0111 16:01:48.804] 	object given to jsonpath engine was:
I0111 16:01:48.805] 		map[string]interface {}{"spec":map[string]interface {}{"schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}, "kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2019-01-11T16:01:48Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1547222507-22143", "selfLink":"/api/v1/namespaces/namespace-1547222507-22143/pods/valid-pod", "uid":"3378c667-15ba-11e9-8e70-0242ac110002", "resourceVersion":"805"}}
I0111 16:01:48.805] has:missing is not found
I0111 16:01:48.889] Successful
I0111 16:01:48.889] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0111 16:01:48.889] 	template was:
I0111 16:01:48.889] 		{{.missing}}
I0111 16:01:48.889] 	raw data was:
I0111 16:01:48.890] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-01-11T16:01:48Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1547222507-22143","resourceVersion":"805","selfLink":"/api/v1/namespaces/namespace-1547222507-22143/pods/valid-pod","uid":"3378c667-15ba-11e9-8e70-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0111 16:01:48.890] 	object given to template engine was:
I0111 16:01:48.891] 		map[apiVersion:v1 kind:Pod metadata:map[uid:3378c667-15ba-11e9-8e70-0242ac110002 creationTimestamp:2019-01-11T16:01:48Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1547222507-22143 resourceVersion:805 selfLink:/api/v1/namespaces/namespace-1547222507-22143/pods/valid-pod] spec:map[dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[terminationMessagePath:/dev/termination-log terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]]]]] status:map[phase:Pending qosClass:Guaranteed]]
I0111 16:01:48.891] has:map has no entry for key "missing"
W0111 16:01:48.991] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0111 16:01:49.963] E0111 16:01:49.962615   68446 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0111 16:01:50.063] Successful
I0111 16:01:50.064] message:NAME        READY   STATUS    RESTARTS   AGE
I0111 16:01:50.064] valid-pod   0/1     Pending   0          0s
I0111 16:01:50.064] has:STATUS
I0111 16:01:50.064] Successful
... skipping 80 lines ...
I0111 16:01:52.241]   terminationGracePeriodSeconds: 30
I0111 16:01:52.241] status:
I0111 16:01:52.241]   phase: Pending
I0111 16:01:52.241]   qosClass: Guaranteed
I0111 16:01:52.241] has:name: valid-pod
I0111 16:01:52.241] Successful
I0111 16:01:52.241] message:Error from server (NotFound): pods "invalid-pod" not found
I0111 16:01:52.241] has:"invalid-pod" not found
I0111 16:01:52.317] pod "valid-pod" deleted
I0111 16:01:52.417] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:52.582] pod/redis-master created
I0111 16:01:52.585] pod/valid-pod created
I0111 16:01:52.689] Successful
... skipping 296 lines ...
I0111 16:01:55.639] message:NAME
I0111 16:01:55.639] sample-role
I0111 16:01:55.639] has:NAME
I0111 16:01:55.639] sample-role
W0111 16:01:55.740] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0111 16:01:55.740] I0111 16:01:55.088299   55966 event.go:221] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1547222512-3718", Name:"pi", UID:"378138f7-15ba-11e9-8e70-0242ac110002", APIVersion:"batch/v1", ResourceVersion:"843", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-pg5wc
W0111 16:01:55.802] E0111 16:01:55.802294   52622 autoregister_controller.go:190] v1.company.com failed with : apiservices.apiregistration.k8s.io "v1.company.com" already exists
I0111 16:01:55.903] customresourcedefinition.apiextensions.k8s.io/foos.company.com created
I0111 16:01:55.903] old-print.sh:120: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: foos.company.com:
I0111 16:01:55.993] old-print.sh:123: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:01:56.143] Successful
I0111 16:01:56.143] message:
I0111 16:01:56.143] has:
... skipping 9 lines ...
I0111 16:01:56.786] Running command: run_create_secret_tests
I0111 16:01:56.810] 
I0111 16:01:56.812] +++ Running case: test-cmd.run_create_secret_tests 
I0111 16:01:56.814] +++ working dir: /go/src/k8s.io/kubernetes
I0111 16:01:56.816] +++ command: run_create_secret_tests
I0111 16:01:56.907] Successful
I0111 16:01:56.907] message:Error from server (NotFound): secrets "mysecret" not found
I0111 16:01:56.908] has:secrets "mysecret" not found
W0111 16:01:57.008] I0111 16:01:55.983282   52622 clientconn.go:551] parsed scheme: ""
W0111 16:01:57.008] I0111 16:01:55.983309   52622 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 16:01:57.009] I0111 16:01:55.983343   52622 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 16:01:57.009] I0111 16:01:55.983403   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:01:57.009] I0111 16:01:55.983796   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:01:57.009] No resources found.
W0111 16:01:57.010] No resources found.
I0111 16:01:57.110] Successful
I0111 16:01:57.110] message:Error from server (NotFound): secrets "mysecret" not found
I0111 16:01:57.110] has:secrets "mysecret" not found
I0111 16:01:57.111] Successful
I0111 16:01:57.111] message:user-specified
I0111 16:01:57.111] has:user-specified
I0111 16:01:57.125] Successful
I0111 16:01:57.198] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"38c36c9e-15ba-11e9-8e70-0242ac110002","resourceVersion":"880","creationTimestamp":"2019-01-11T16:01:57Z"}}
... skipping 80 lines ...
I0111 16:01:59.067] has:Timeout exceeded while reading body
I0111 16:01:59.142] Successful
I0111 16:01:59.143] message:NAME        READY   STATUS    RESTARTS   AGE
I0111 16:01:59.143] valid-pod   0/1     Pending   0          2s
I0111 16:01:59.143] has:valid-pod
I0111 16:01:59.206] Successful
I0111 16:01:59.206] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0111 16:01:59.206] has:Invalid timeout value
I0111 16:01:59.278] pod "valid-pod" deleted
I0111 16:01:59.294] +++ exit code: 0
I0111 16:01:59.324] Recording: run_crd_tests
I0111 16:01:59.325] Running command: run_crd_tests
I0111 16:01:59.343] 
... skipping 166 lines ...
I0111 16:02:03.613] foo.company.com/test patched
I0111 16:02:03.714] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0111 16:02:03.788] foo.company.com/test patched
I0111 16:02:03.873] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0111 16:02:03.954] foo.company.com/test patched
I0111 16:02:04.037] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0111 16:02:04.182] +++ [0111 16:02:04] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0111 16:02:04.244] {
I0111 16:02:04.244]     "apiVersion": "company.com/v1",
I0111 16:02:04.245]     "kind": "Foo",
I0111 16:02:04.245]     "metadata": {
I0111 16:02:04.245]         "annotations": {
I0111 16:02:04.245]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
W0111 16:02:05.821] I0111 16:02:02.009345   52622 controller.go:606] quota admission added evaluator for: foos.company.com
W0111 16:02:05.821] I0111 16:02:05.442714   52622 controller.go:606] quota admission added evaluator for: bars.company.com
W0111 16:02:05.821] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 70885 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0111 16:02:05.821] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 70886 Killed                  while [ ${tries} -lt 10 ]; do
W0111 16:02:05.822]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0111 16:02:05.822] done
W0111 16:02:10.277] E0111 16:02:10.276441   55966 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos"]
W0111 16:02:10.414] I0111 16:02:10.413323   55966 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 16:02:10.419] I0111 16:02:10.415410   52622 clientconn.go:551] parsed scheme: ""
W0111 16:02:10.420] I0111 16:02:10.415448   52622 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 16:02:10.420] I0111 16:02:10.415489   52622 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 16:02:10.420] I0111 16:02:10.415544   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:02:10.421] I0111 16:02:10.417035   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 62 lines ...
I0111 16:02:16.633] namespace/non-native-resources created
I0111 16:02:16.793] bar.company.com/test created
I0111 16:02:16.890] crd.sh:456: Successful get bars {{len .items}}: 1
I0111 16:02:16.979] namespace "non-native-resources" deleted
I0111 16:02:22.396] crd.sh:459: Successful get bars {{len .items}}: 0
I0111 16:02:22.642] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0111 16:02:22.743] Error from server (NotFound): namespaces "non-native-resources" not found
I0111 16:02:22.844] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0111 16:02:22.949] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0111 16:02:23.118] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0111 16:02:23.162] +++ exit code: 0
I0111 16:02:23.216] Recording: run_cmd_with_img_tests
I0111 16:02:23.217] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0111 16:02:23.645] I0111 16:02:23.643633   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222543-22004", Name:"test1-fb488bd5d", UID:"48852fd1-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-pcnmk
I0111 16:02:23.746] Successful
I0111 16:02:23.746] message:deployment.apps/test1 created
I0111 16:02:23.747] has:deployment.apps/test1 created
I0111 16:02:23.781] deployment.extensions "test1" deleted
I0111 16:02:23.900] Successful
I0111 16:02:23.900] message:error: Invalid image name "InvalidImageName": invalid reference format
I0111 16:02:23.901] has:error: Invalid image name "InvalidImageName": invalid reference format
I0111 16:02:23.921] +++ exit code: 0
I0111 16:02:23.966] Recording: run_recursive_resources_tests
I0111 16:02:23.967] Running command: run_recursive_resources_tests
I0111 16:02:23.994] 
I0111 16:02:23.997] +++ Running case: test-cmd.run_recursive_resources_tests 
I0111 16:02:24.001] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0111 16:02:24.237] Context "test" modified.
I0111 16:02:24.373] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:02:24.780] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:24.783] Successful
I0111 16:02:24.783] message:pod/busybox0 created
I0111 16:02:24.783] pod/busybox1 created
I0111 16:02:24.784] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 16:02:24.784] has:error validating data: kind not set
I0111 16:02:24.934] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:25.187] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0111 16:02:25.190] Successful
I0111 16:02:25.191] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 16:02:25.191] has:Object 'Kind' is missing
I0111 16:02:25.349] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:25.803] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 16:02:25.805] Successful
I0111 16:02:25.806] message:pod/busybox0 replaced
I0111 16:02:25.806] pod/busybox1 replaced
I0111 16:02:25.807] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 16:02:25.807] has:error validating data: kind not set
I0111 16:02:25.953] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:26.096] Successful
I0111 16:02:26.097] message:Name:               busybox0
I0111 16:02:26.097] Namespace:          namespace-1547222544-4868
I0111 16:02:26.097] Priority:           0
I0111 16:02:26.097] PriorityClassName:  <none>
... skipping 159 lines ...
I0111 16:02:26.132] has:Object 'Kind' is missing
I0111 16:02:26.255] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:26.423] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0111 16:02:26.425] Successful
I0111 16:02:26.425] message:pod/busybox0 annotated
I0111 16:02:26.426] pod/busybox1 annotated
I0111 16:02:26.426] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 16:02:26.426] has:Object 'Kind' is missing
I0111 16:02:26.511] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:26.770] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 16:02:26.772] Successful
I0111 16:02:26.773] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 16:02:26.773] pod/busybox0 configured
I0111 16:02:26.773] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 16:02:26.773] pod/busybox1 configured
I0111 16:02:26.773] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 16:02:26.774] has:error validating data: kind not set
I0111 16:02:26.857] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:02:27.002] deployment.apps/nginx created
I0111 16:02:27.102] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0111 16:02:27.185] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 16:02:27.341] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0111 16:02:27.343] Successful
... skipping 42 lines ...
I0111 16:02:27.416] deployment.extensions "nginx" deleted
I0111 16:02:27.511] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:27.667] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:27.671] Successful
I0111 16:02:27.671] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0111 16:02:27.671] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 16:02:27.672] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 16:02:27.672] has:Object 'Kind' is missing
I0111 16:02:27.765] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:27.846] Successful
I0111 16:02:27.847] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 16:02:27.847] has:busybox0:busybox1:
I0111 16:02:27.848] Successful
I0111 16:02:27.849] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 16:02:27.849] has:Object 'Kind' is missing
I0111 16:02:27.932] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:28.017] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 16:02:28.102] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0111 16:02:28.105] Successful
I0111 16:02:28.105] message:pod/busybox0 labeled
I0111 16:02:28.105] pod/busybox1 labeled
I0111 16:02:28.106] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 16:02:28.106] has:Object 'Kind' is missing
I0111 16:02:28.193] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:28.272] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 16:02:28.357] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0111 16:02:28.359] Successful
I0111 16:02:28.359] message:pod/busybox0 patched
I0111 16:02:28.359] pod/busybox1 patched
I0111 16:02:28.360] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 16:02:28.360] has:Object 'Kind' is missing
I0111 16:02:28.445] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:28.616] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:02:28.618] Successful
I0111 16:02:28.619] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 16:02:28.619] pod "busybox0" force deleted
I0111 16:02:28.619] pod "busybox1" force deleted
I0111 16:02:28.619] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 16:02:28.619] has:Object 'Kind' is missing
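The forced deletions above, with their "Immediate deletion" warning, correspond to a force delete over the recursive test directory. A sketch of the kind of invocation that produces this output (flags inferred from the messages, not copied from the test script):

  kubectl delete -f hack/testdata/recursive/pod --recursive --grace-period=0 --force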
I0111 16:02:28.710] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:02:28.869] replicationcontroller/busybox0 created
I0111 16:02:28.886] replicationcontroller/busybox1 created
I0111 16:02:28.986] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:29.067] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:29.155] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 16:02:29.235] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 16:02:29.409] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 16:02:29.497] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 16:02:29.499] Successful
I0111 16:02:29.499] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0111 16:02:29.499] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0111 16:02:29.500] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:29.500] has:Object 'Kind' is missing
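The 1/2/80 HPA specs verified above line up with an autoscale pass over the same recursive rc directory; a plausible sketch, with flag values inferred from the assertions:

  kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80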
I0111 16:02:29.579] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0111 16:02:29.664] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0111 16:02:29.765] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:29.857] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 16:02:29.946] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 16:02:30.138] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 16:02:30.236] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 16:02:30.238] Successful
I0111 16:02:30.238] message:service/busybox0 exposed
I0111 16:02:30.238] service/busybox1 exposed
I0111 16:02:30.239] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:30.239] has:Object 'Kind' is missing
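The "<no value> 80" assertions above show services exposed on port 80 with no port name; a sketch of an expose call consistent with that output (invocation assumed):

  kubectl expose -f hack/testdata/recursive/rc --recursive --port=80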
I0111 16:02:30.331] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:30.417] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 16:02:30.510] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 16:02:30.733] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0111 16:02:30.833] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0111 16:02:30.841] Successful
I0111 16:02:30.841] message:replicationcontroller/busybox0 scaled
I0111 16:02:30.842] replicationcontroller/busybox1 scaled
I0111 16:02:30.842] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:30.842] has:Object 'Kind' is missing
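The jump from 1 to 2 replicas checked above corresponds to a scale pass over the same directory; a hedged sketch:

  kubectl scale -f hack/testdata/recursive/rc --recursive --replicas=2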
I0111 16:02:30.944] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:31.140] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:02:31.142] Successful
I0111 16:02:31.143] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 16:02:31.143] replicationcontroller "busybox0" force deleted
I0111 16:02:31.143] replicationcontroller "busybox1" force deleted
I0111 16:02:31.144] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:31.144] has:Object 'Kind' is missing
I0111 16:02:31.240] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:02:31.404] deployment.apps/nginx1-deployment created
I0111 16:02:31.411] deployment.apps/nginx0-deployment created
W0111 16:02:31.511] I0111 16:02:27.005822   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222544-4868", Name:"nginx", UID:"4a876aee-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1014", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W0111 16:02:31.512] I0111 16:02:27.008820   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222544-4868", Name:"nginx-6f6bb85d9c", UID:"4a87f1f4-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-5c9qv
W0111 16:02:31.512] I0111 16:02:27.011471   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222544-4868", Name:"nginx-6f6bb85d9c", UID:"4a87f1f4-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-tlv26
W0111 16:02:31.513] I0111 16:02:27.011677   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222544-4868", Name:"nginx-6f6bb85d9c", UID:"4a87f1f4-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-t9lqm
W0111 16:02:31.513] I0111 16:02:27.218420   55966 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0111 16:02:31.513] kubectl convert is DEPRECATED and will be removed in a future version.
W0111 16:02:31.513] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0111 16:02:31.513] I0111 16:02:28.872140   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222544-4868", Name:"busybox0", UID:"4ba43c42-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1045", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-jnmx2
W0111 16:02:31.514] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 16:02:31.514] I0111 16:02:28.888294   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222544-4868", Name:"busybox1", UID:"4ba6ef1f-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1050", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-6hzwh
W0111 16:02:31.514] I0111 16:02:30.619709   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222544-4868", Name:"busybox0", UID:"4ba43c42-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1065", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-m952d
W0111 16:02:31.515] I0111 16:02:30.628988   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222544-4868", Name:"busybox1", UID:"4ba6ef1f-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-x95wr
W0111 16:02:31.515] I0111 16:02:31.407562   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222544-4868", Name:"nginx1-deployment", UID:"4d270dde-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W0111 16:02:31.516] I0111 16:02:31.411859   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222544-4868", Name:"nginx1-deployment-75f6fc6747", UID:"4d2795ca-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-8bqlh
W0111 16:02:31.516] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 16:02:31.516] I0111 16:02:31.413425   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222544-4868", Name:"nginx0-deployment", UID:"4d27df30-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W0111 16:02:31.517] I0111 16:02:31.414699   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222544-4868", Name:"nginx1-deployment-75f6fc6747", UID:"4d2795ca-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-qjlj2
W0111 16:02:31.517] I0111 16:02:31.417154   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222544-4868", Name:"nginx0-deployment-b6bb4ccbb", UID:"4d288bd1-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1092", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-62t88
W0111 16:02:31.517] I0111 16:02:31.422002   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222544-4868", Name:"nginx0-deployment-b6bb4ccbb", UID:"4d288bd1-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1092", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-7hm7g
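The "kind not set" validation errors in the warnings above name their own escape hatch; a sketch of disabling client-side validation (reasonable inside tests, rarely elsewhere):

  kubectl create -f hack/testdata/recursive/rc --recursive --validate=false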
I0111 16:02:31.618] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0111 16:02:31.619] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 16:02:31.849] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 16:02:31.851] Successful
I0111 16:02:31.851] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0111 16:02:31.851] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0111 16:02:31.852] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 16:02:31.852] has:Object 'Kind' is missing
I0111 16:02:31.934] deployment.apps/nginx1-deployment paused
I0111 16:02:31.939] deployment.apps/nginx0-deployment paused
I0111 16:02:32.032] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0111 16:02:32.034] Successful
I0111 16:02:32.034] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0111 16:02:32.325] 1         <none>
I0111 16:02:32.325] 
I0111 16:02:32.325] deployment.apps/nginx0-deployment 
I0111 16:02:32.325] REVISION  CHANGE-CAUSE
I0111 16:02:32.325] 1         <none>
I0111 16:02:32.326] 
I0111 16:02:32.326] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 16:02:32.326] has:nginx0-deployment
I0111 16:02:32.326] Successful
I0111 16:02:32.326] message:deployment.apps/nginx1-deployment 
I0111 16:02:32.326] REVISION  CHANGE-CAUSE
I0111 16:02:32.327] 1         <none>
I0111 16:02:32.327] 
I0111 16:02:32.327] deployment.apps/nginx0-deployment 
I0111 16:02:32.327] REVISION  CHANGE-CAUSE
I0111 16:02:32.327] 1         <none>
I0111 16:02:32.327] 
I0111 16:02:32.328] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 16:02:32.328] has:nginx1-deployment
I0111 16:02:32.328] Successful
I0111 16:02:32.328] message:deployment.apps/nginx1-deployment 
I0111 16:02:32.328] REVISION  CHANGE-CAUSE
I0111 16:02:32.328] 1         <none>
I0111 16:02:32.328] 
I0111 16:02:32.328] deployment.apps/nginx0-deployment 
I0111 16:02:32.328] REVISION  CHANGE-CAUSE
I0111 16:02:32.329] 1         <none>
I0111 16:02:32.329] 
I0111 16:02:32.329] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 16:02:32.329] has:Object 'Kind' is missing
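The REVISION/CHANGE-CAUSE tables above are standard rollout-history output; a sketch of the kind of call that prints them for every deployment under the directory (invocation assumed):

  kubectl rollout history -f hack/testdata/recursive/deployment --recursive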
I0111 16:02:32.401] deployment.apps "nginx1-deployment" force deleted
I0111 16:02:32.406] deployment.apps "nginx0-deployment" force deleted
W0111 16:02:32.506] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 16:02:32.507] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 16:02:33.500] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:02:33.652] replicationcontroller/busybox0 created
I0111 16:02:33.656] replicationcontroller/busybox1 created
I0111 16:02:33.755] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 16:02:33.840] Successful
I0111 16:02:33.840] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0111 16:02:33.842] message:no rollbacker has been implemented for "ReplicationController"
I0111 16:02:33.842] no rollbacker has been implemented for "ReplicationController"
I0111 16:02:33.842] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:33.842] has:Object 'Kind' is missing
I0111 16:02:33.927] Successful
I0111 16:02:33.928] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:33.928] error: replicationcontrollers "busybox0" pausing is not supported
I0111 16:02:33.928] error: replicationcontrollers "busybox1" pausing is not supported
I0111 16:02:33.928] has:Object 'Kind' is missing
I0111 16:02:33.930] Successful
I0111 16:02:33.930] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:33.930] error: replicationcontrollers "busybox0" pausing is not supported
I0111 16:02:33.931] error: replicationcontrollers "busybox1" pausing is not supported
I0111 16:02:33.931] has:replicationcontrollers "busybox0" pausing is not supported
I0111 16:02:33.933] Successful
I0111 16:02:33.933] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:33.933] error: replicationcontrollers "busybox0" pausing is not supported
I0111 16:02:33.933] error: replicationcontrollers "busybox1" pausing is not supported
I0111 16:02:33.934] has:replicationcontrollers "busybox1" pausing is not supported
I0111 16:02:34.017] Successful
I0111 16:02:34.018] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:34.018] error: replicationcontrollers "busybox0" resuming is not supported
I0111 16:02:34.018] error: replicationcontrollers "busybox1" resuming is not supported
I0111 16:02:34.018] has:Object 'Kind' is missing
I0111 16:02:34.019] Successful
I0111 16:02:34.020] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:34.020] error: replicationcontrollers "busybox0" resuming is not supported
I0111 16:02:34.020] error: replicationcontrollers "busybox1" resuming is not supported
I0111 16:02:34.020] has:replicationcontrollers "busybox0" resuming is not supported
I0111 16:02:34.022] Successful
I0111 16:02:34.023] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:34.023] error: replicationcontrollers "busybox0" resuming is not supported
I0111 16:02:34.023] error: replicationcontrollers "busybox1" resuming is not supported
I0111 16:02:34.023] has:replicationcontrollers "busybox0" resuming is not supported
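The pausing/resuming errors above reflect that rollout pause and resume are only implemented for rollout-aware kinds such as Deployment; an illustrative attempt against one of the replication controllers:

  kubectl rollout pause rc/busybox0    # error: replicationcontrollers "busybox0" pausing is not supported
  kubectl rollout resume rc/busybox0   # error: replicationcontrollers "busybox0" resuming is not supported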
I0111 16:02:34.094] replicationcontroller "busybox0" force deleted
I0111 16:02:34.100] replicationcontroller "busybox1" force deleted
W0111 16:02:34.200] I0111 16:02:33.655196   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222544-4868", Name:"busybox0", UID:"4e7e0e9f-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1135", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cf46b
W0111 16:02:34.201] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 16:02:34.201] I0111 16:02:33.659610   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222544-4868", Name:"busybox1", UID:"4e7ec575-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-s4sbx
W0111 16:02:34.201] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 16:02:34.202] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 16:02:35.118] +++ exit code: 0
I0111 16:02:35.156] Recording: run_namespace_tests
I0111 16:02:35.156] Running command: run_namespace_tests
I0111 16:02:35.174] 
I0111 16:02:35.176] +++ Running case: test-cmd.run_namespace_tests 
I0111 16:02:35.178] +++ working dir: /go/src/k8s.io/kubernetes
I0111 16:02:35.180] +++ command: run_namespace_tests
I0111 16:02:35.188] +++ [0111 16:02:35] Testing kubectl(v1:namespaces)
I0111 16:02:35.251] namespace/my-namespace created
I0111 16:02:35.335] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0111 16:02:35.405] namespace "my-namespace" deleted
W0111 16:02:40.329] E0111 16:02:40.328906   55966 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0111 16:02:40.505] namespace/my-namespace condition met
I0111 16:02:40.601] Successful
I0111 16:02:40.601] message:Error from server (NotFound): namespaces "my-namespace" not found
I0111 16:02:40.602] has: not found
W0111 16:02:40.702] I0111 16:02:40.569234   55966 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 16:02:40.702] I0111 16:02:40.669546   55966 controller_utils.go:1028] Caches are synced for garbage collector controller
I0111 16:02:40.803] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0111 16:02:40.803] namespace/other created
I0111 16:02:40.890] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0111 16:02:40.978] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:02:41.182] pod/valid-pod created
I0111 16:02:41.282] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 16:02:41.374] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 16:02:41.455] Successful
I0111 16:02:41.455] message:error: a resource cannot be retrieved by name across all namespaces
I0111 16:02:41.456] has:a resource cannot be retrieved by name across all namespaces
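The error above is kubectl rejecting a by-name lookup combined with --all-namespaces, since a named resource is only meaningful within one namespace; an illustrative trigger (exact command assumed):

  kubectl get pods valid-pod --all-namespaces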
I0111 16:02:41.543] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 16:02:41.639] pod "valid-pod" force deleted
W0111 16:02:41.739] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 16:02:41.840] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:02:41.840] namespace "other" deleted
... skipping 117 lines ...
I0111 16:03:03.030] +++ command: run_client_config_tests
I0111 16:03:03.041] +++ [0111 16:03:03] Creating namespace namespace-1547222583-32709
I0111 16:03:03.119] namespace/namespace-1547222583-32709 created
I0111 16:03:03.199] Context "test" modified.
I0111 16:03:03.205] +++ [0111 16:03:03] Testing client config
I0111 16:03:03.275] Successful
I0111 16:03:03.275] message:error: stat missing: no such file or directory
I0111 16:03:03.275] has:missing: no such file or directory
I0111 16:03:03.345] Successful
I0111 16:03:03.345] message:error: stat missing: no such file or directory
I0111 16:03:03.345] has:missing: no such file or directory
I0111 16:03:03.419] Successful
I0111 16:03:03.419] message:error: stat missing: no such file or directory
I0111 16:03:03.419] has:missing: no such file or directory
I0111 16:03:03.490] Successful
I0111 16:03:03.491] message:Error in configuration: context was not found for specified context: missing-context
I0111 16:03:03.491] has:context was not found for specified context: missing-context
I0111 16:03:03.560] Successful
I0111 16:03:03.560] message:error: no server found for cluster "missing-cluster"
I0111 16:03:03.561] has:no server found for cluster "missing-cluster"
I0111 16:03:03.627] Successful
I0111 16:03:03.627] message:error: auth info "missing-user" does not exist
I0111 16:03:03.628] has:auth info "missing-user" does not exist
I0111 16:03:03.764] Successful
I0111 16:03:03.764] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0111 16:03:03.764] has:Error loading config file
I0111 16:03:03.831] Successful
I0111 16:03:03.832] message:error: stat missing-config: no such file or directory
I0111 16:03:03.832] has:no such file or directory
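Each client-config failure above exercises one bad kubeconfig input; illustrative triggers (the flags are standard, the exact commands are assumed):

  kubectl get pods --kubeconfig=missing        # stat missing: no such file or directory
  kubectl get pods --context=missing-context   # context was not found
  kubectl get pods --cluster=missing-cluster   # no server found for cluster
  kubectl get pods --user=missing-user         # auth info "missing-user" does not exist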
I0111 16:03:03.847] +++ exit code: 0
I0111 16:03:03.885] Recording: run_service_accounts_tests
I0111 16:03:03.886] Running command: run_service_accounts_tests
I0111 16:03:03.908] 
I0111 16:03:03.910] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 35 lines ...
I0111 16:03:10.929] Labels:                        run=pi
I0111 16:03:10.930] Annotations:                   <none>
I0111 16:03:10.930] Schedule:                      59 23 31 2 *
I0111 16:03:10.930] Concurrency Policy:            Allow
I0111 16:03:10.930] Suspend:                       False
I0111 16:03:10.930] Successful Job History Limit:  824642448872
I0111 16:03:10.930] Failed Job History Limit:      1
I0111 16:03:10.930] Starting Deadline Seconds:     <unset>
I0111 16:03:10.930] Selector:                      <unset>
I0111 16:03:10.931] Parallelism:                   <unset>
I0111 16:03:10.931] Completions:                   <unset>
I0111 16:03:10.931] Pod Template:
I0111 16:03:10.931]   Labels:  run=pi
... skipping 32 lines ...
I0111 16:03:11.538]                 job-name=test-job
I0111 16:03:11.538]                 run=pi
I0111 16:03:11.538] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0111 16:03:11.539] Parallelism:    1
I0111 16:03:11.539] Completions:    1
I0111 16:03:11.539] Start Time:     Fri, 11 Jan 2019 16:03:11 +0000
I0111 16:03:11.539] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0111 16:03:11.539] Pod Template:
I0111 16:03:11.539]   Labels:  controller-uid=64e4a61d-15ba-11e9-8e70-0242ac110002
I0111 16:03:11.539]            job-name=test-job
I0111 16:03:11.539]            run=pi
I0111 16:03:11.539]   Containers:
I0111 16:03:11.539]    pi:
... skipping 328 lines ...
I0111 16:03:22.662]   selector:
I0111 16:03:22.662]     role: padawan
I0111 16:03:22.662]   sessionAffinity: None
I0111 16:03:22.663]   type: ClusterIP
I0111 16:03:22.663] status:
I0111 16:03:22.663]   loadBalancer: {}
W0111 16:03:22.763] error: you must specify resources by --filename when --local is set.
W0111 16:03:22.764] Example resource specifications include:
W0111 16:03:22.764]    '-f rsrc.yaml'
W0111 16:03:22.764]    '--filename=rsrc.json'
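The error above shows that --local mode never contacts the server, so the object must be supplied with --filename; a sketch using the flag forms the message itself suggests, with the selector taken from the dry-run YAML above (exact command assumed):

  kubectl set selector -f rsrc.yaml 'role=padawan' --local -o yaml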
I0111 16:03:22.871] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0111 16:03:23.094] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0111 16:03:23.197] (Bservice "redis-master" deleted
... skipping 93 lines ...
I0111 16:03:31.275] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 16:03:31.416] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 16:03:31.559] daemonset.extensions/bind rolled back
I0111 16:03:31.697] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 16:03:31.840] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 16:03:32.049] Successful
I0111 16:03:32.049] message:error: unable to find specified revision 1000000 in history
I0111 16:03:32.049] has:unable to find specified revision
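Rolling back to a revision that is not in the controller's history fails cleanly, as asserted above; an illustrative call against the daemonset under test:

  kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision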
I0111 16:03:32.146] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 16:03:32.276] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 16:03:32.418] daemonset.extensions/bind rolled back
W0111 16:03:32.522] E0111 16:03:32.439497   55966 daemon_controller.go:302] namespace-1547222608-3079/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1547222608-3079", SelfLink:"/apis/apps/v1/namespaces/namespace-1547222608-3079/daemonsets/bind", UID:"6fc1f657-15ba-11e9-8e70-0242ac110002", ResourceVersion:"1358", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63682819409, loc:(*time.Location)(0x6962be0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1547222608-3079\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true", "deprecated.daemonset.template.generation":"4"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc003dca840), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", 
Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003d72bd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004456840), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc003dca8a0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000eef450)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc003d72cd0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
I0111 16:03:32.623] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0111 16:03:32.674] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 16:03:32.797] apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 16:03:32.894] daemonset.apps "bind" deleted
I0111 16:03:32.919] +++ exit code: 0
I0111 16:03:32.955] Recording: run_rc_tests
... skipping 24 lines ...
I0111 16:03:34.398] Namespace:    namespace-1547222612-9117
I0111 16:03:34.398] Selector:     app=guestbook,tier=frontend
I0111 16:03:34.398] Labels:       app=guestbook
I0111 16:03:34.398]               tier=frontend
I0111 16:03:34.399] Annotations:  <none>
I0111 16:03:34.399] Replicas:     3 current / 3 desired
I0111 16:03:34.399] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:03:34.399] Pod Template:
I0111 16:03:34.400]   Labels:  app=guestbook
I0111 16:03:34.400]            tier=frontend
I0111 16:03:34.400]   Containers:
I0111 16:03:34.400]    php-redis:
I0111 16:03:34.401]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 16:03:34.546] Namespace:    namespace-1547222612-9117
I0111 16:03:34.547] Selector:     app=guestbook,tier=frontend
I0111 16:03:34.547] Labels:       app=guestbook
I0111 16:03:34.547]               tier=frontend
I0111 16:03:34.548] Annotations:  <none>
I0111 16:03:34.548] Replicas:     3 current / 3 desired
I0111 16:03:34.548] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:03:34.549] Pod Template:
I0111 16:03:34.549]   Labels:  app=guestbook
I0111 16:03:34.549]            tier=frontend
I0111 16:03:34.549]   Containers:
I0111 16:03:34.550]    php-redis:
I0111 16:03:34.550]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0111 16:03:34.702] Namespace:    namespace-1547222612-9117
I0111 16:03:34.702] Selector:     app=guestbook,tier=frontend
I0111 16:03:34.703] Labels:       app=guestbook
I0111 16:03:34.703]               tier=frontend
I0111 16:03:34.703] Annotations:  <none>
I0111 16:03:34.703] Replicas:     3 current / 3 desired
I0111 16:03:34.704] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:03:34.704] Pod Template:
I0111 16:03:34.704]   Labels:  app=guestbook
I0111 16:03:34.705]            tier=frontend
I0111 16:03:34.705]   Containers:
I0111 16:03:34.705]    php-redis:
I0111 16:03:34.705]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0111 16:03:34.856] Namespace:    namespace-1547222612-9117
I0111 16:03:34.856] Selector:     app=guestbook,tier=frontend
I0111 16:03:34.857] Labels:       app=guestbook
I0111 16:03:34.857]               tier=frontend
I0111 16:03:34.857] Annotations:  <none>
I0111 16:03:34.857] Replicas:     3 current / 3 desired
I0111 16:03:34.857] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:03:34.857] Pod Template:
I0111 16:03:34.857]   Labels:  app=guestbook
I0111 16:03:34.858]            tier=frontend
I0111 16:03:34.858]   Containers:
I0111 16:03:34.858]    php-redis:
I0111 16:03:34.858]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0111 16:03:35.040] Namespace:    namespace-1547222612-9117
I0111 16:03:35.040] Selector:     app=guestbook,tier=frontend
I0111 16:03:35.040] Labels:       app=guestbook
I0111 16:03:35.041]               tier=frontend
I0111 16:03:35.041] Annotations:  <none>
I0111 16:03:35.041] Replicas:     3 current / 3 desired
I0111 16:03:35.041] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:03:35.042] Pod Template:
I0111 16:03:35.042]   Labels:  app=guestbook
I0111 16:03:35.042]            tier=frontend
I0111 16:03:35.042]   Containers:
I0111 16:03:35.043]    php-redis:
I0111 16:03:35.043]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 16:03:35.169] Namespace:    namespace-1547222612-9117
I0111 16:03:35.169] Selector:     app=guestbook,tier=frontend
I0111 16:03:35.169] Labels:       app=guestbook
I0111 16:03:35.169]               tier=frontend
I0111 16:03:35.169] Annotations:  <none>
I0111 16:03:35.169] Replicas:     3 current / 3 desired
I0111 16:03:35.170] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:03:35.170] Pod Template:
I0111 16:03:35.170]   Labels:  app=guestbook
I0111 16:03:35.170]            tier=frontend
I0111 16:03:35.170]   Containers:
I0111 16:03:35.170]    php-redis:
I0111 16:03:35.170]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 16:03:35.307] Namespace:    namespace-1547222612-9117
I0111 16:03:35.307] Selector:     app=guestbook,tier=frontend
I0111 16:03:35.307] Labels:       app=guestbook
I0111 16:03:35.308]               tier=frontend
I0111 16:03:35.308] Annotations:  <none>
I0111 16:03:35.308] Replicas:     3 current / 3 desired
I0111 16:03:35.308] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:03:35.309] Pod Template:
I0111 16:03:35.309]   Labels:  app=guestbook
I0111 16:03:35.309]            tier=frontend
I0111 16:03:35.309]   Containers:
I0111 16:03:35.309]    php-redis:
I0111 16:03:35.310]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0111 16:03:35.455] Namespace:    namespace-1547222612-9117
I0111 16:03:35.456] Selector:     app=guestbook,tier=frontend
I0111 16:03:35.456] Labels:       app=guestbook
I0111 16:03:35.456]               tier=frontend
I0111 16:03:35.456] Annotations:  <none>
I0111 16:03:35.457] Replicas:     3 current / 3 desired
I0111 16:03:35.457] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:03:35.457] Pod Template:
I0111 16:03:35.458]   Labels:  app=guestbook
I0111 16:03:35.458]            tier=frontend
I0111 16:03:35.458]   Containers:
I0111 16:03:35.458]    php-redis:
I0111 16:03:35.459]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0111 16:03:36.328] replicationcontroller/frontend scaled
I0111 16:03:36.416] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0111 16:03:36.498] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0111 16:03:36.582] replicationcontroller/frontend scaled
I0111 16:03:36.683] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0111 16:03:36.759] replicationcontroller "frontend" deleted
W0111 16:03:36.859] error: Expected replicas to be 3, was 2
W0111 16:03:36.860] I0111 16:03:36.331340   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222612-9117", Name:"frontend", UID:"7284a82a-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1400", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5pzx9
W0111 16:03:36.860] I0111 16:03:36.587860   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222612-9117", Name:"frontend", UID:"7284a82a-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1405", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-5pzx9
W0111 16:03:36.916] I0111 16:03:36.915794   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222612-9117", Name:"redis-master", UID:"7432cffe-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-64mt4
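The "Expected replicas to be 3, was 2" error in the warnings above is the precondition form of scale, which refuses to act unless the current count matches; an illustrative sketch (values inferred from the message):

  kubectl scale rc frontend --current-replicas=3 --replicas=2   # fails when frontend is not currently at 3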
I0111 16:03:37.017] replicationcontroller/redis-master created
I0111 16:03:37.069] replicationcontroller/redis-slave created
W0111 16:03:37.170] I0111 16:03:37.072419   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222612-9117", Name:"redis-slave", UID:"744ab8a4-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-7gcw5
... skipping 36 lines ...
I0111 16:03:38.803] service "expose-test-deployment" deleted
I0111 16:03:38.901] Successful
I0111 16:03:38.901] message:service/expose-test-deployment exposed
I0111 16:03:38.901] has:service/expose-test-deployment exposed
I0111 16:03:38.975] service "expose-test-deployment" deleted
I0111 16:03:39.066] Successful
I0111 16:03:39.066] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0111 16:03:39.066] See 'kubectl expose -h' for help and examples
I0111 16:03:39.066] has:invalid deployment: no selectors
I0111 16:03:39.153] Successful
I0111 16:03:39.154] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0111 16:03:39.154] See 'kubectl expose -h' for help and examples
I0111 16:03:39.154] has:invalid deployment: no selectors
I0111 16:03:39.326] deployment.apps/nginx-deployment created
I0111 16:03:39.429] core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0111 16:03:39.526] service/nginx-deployment exposed
I0111 16:03:39.626] core.sh:1137: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
... skipping 23 lines ...
I0111 16:03:41.407] service "frontend" deleted
I0111 16:03:41.415] service "frontend-2" deleted
I0111 16:03:41.422] service "frontend-3" deleted
I0111 16:03:41.430] service "frontend-4" deleted
I0111 16:03:41.438] service "frontend-5" deleted
I0111 16:03:41.559] Successful
I0111 16:03:41.559] message:error: cannot expose a Node
I0111 16:03:41.559] has:cannot expose
I0111 16:03:41.714] Successful
I0111 16:03:41.714] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0111 16:03:41.714] has:metadata.name: Invalid value
I0111 16:03:41.821] Successful
I0111 16:03:41.822] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
W0111 16:03:44.187] I0111 16:03:43.691312   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222612-9117", Name:"frontend", UID:"783c8115-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1641", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-59jvj
W0111 16:03:44.188] I0111 16:03:43.694998   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222612-9117", Name:"frontend", UID:"783c8115-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1641", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tfrlx
W0111 16:03:44.188] I0111 16:03:43.695048   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222612-9117", Name:"frontend", UID:"783c8115-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"1641", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-29kvg
I0111 16:03:44.289] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 16:03:44.324] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 16:03:44.423] horizontalpodautoscaler.autoscaling "frontend" deleted
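The 2/3/80 HPA spec verified just above matches an autoscale call with explicit bounds; a hedged sketch:

  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80

The 'required flag(s) "max" not set' failure that follows is evidently the same command issued without --max, which kubectl rejects before contacting the server.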
W0111 16:03:44.524] Error: required flag(s) "max" not set
W0111 16:03:44.524] 
W0111 16:03:44.524] 
W0111 16:03:44.524] Examples:
W0111 16:03:44.525]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 16:03:44.525]   kubectl autoscale deployment foo --min=2 --max=10
W0111 16:03:44.525]   
... skipping 54 lines ...
I0111 16:03:44.843]           limits:
I0111 16:03:44.844]             cpu: 300m
I0111 16:03:44.844]           requests:
I0111 16:03:44.844]             cpu: 300m
I0111 16:03:44.844]       terminationGracePeriodSeconds: 0
I0111 16:03:44.844] status: {}
W0111 16:03:44.944] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0111 16:03:45.137] deployment.apps/nginx-deployment-resources created
W0111 16:03:45.238] I0111 16:03:45.142224   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222612-9117", Name:"nginx-deployment-resources", UID:"7919bdcb-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1663", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69c96fd869 to 3
W0111 16:03:45.238] I0111 16:03:45.146670   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222612-9117", Name:"nginx-deployment-resources-69c96fd869", UID:"791a6375-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-l559n
W0111 16:03:45.239] I0111 16:03:45.150633   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222612-9117", Name:"nginx-deployment-resources-69c96fd869", UID:"791a6375-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-8bp65
W0111 16:03:45.239] I0111 16:03:45.151099   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222612-9117", Name:"nginx-deployment-resources-69c96fd869", UID:"791a6375-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-w65p4
I0111 16:03:45.340] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 5 lines ...
I0111 16:03:45.765] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0111 16:03:45.773] core.sh:1258: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0111 16:03:45.971] deployment.extensions/nginx-deployment-resources resource requirements updated
I0111 16:03:46.071] core.sh:1263: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0111 16:03:46.161] core.sh:1264: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0111 16:03:46.248] deployment.apps/nginx-deployment-resources resource requirements updated
W0111 16:03:46.349] error: unable to find container named redis
W0111 16:03:46.350] I0111 16:03:45.982056   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222612-9117", Name:"nginx-deployment-resources", UID:"7919bdcb-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W0111 16:03:46.350] I0111 16:03:45.987359   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222612-9117", Name:"nginx-deployment-resources-69c96fd869", UID:"791a6375-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-l559n
W0111 16:03:46.350] I0111 16:03:45.989367   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222612-9117", Name:"nginx-deployment-resources", UID:"7919bdcb-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1689", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 1
W0111 16:03:46.351] I0111 16:03:45.992628   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222612-9117", Name:"nginx-deployment-resources-5f4579485f", UID:"79999a66-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-twlk7
W0111 16:03:46.351] I0111 16:03:46.259346   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222612-9117", Name:"nginx-deployment-resources", UID:"7919bdcb-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1707", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-5f4579485f to 0
W0111 16:03:46.352] I0111 16:03:46.267344   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222612-9117", Name:"nginx-deployment-resources-5f4579485f", UID:"79999a66-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1711", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5f4579485f-twlk7
... skipping 76 lines ...
I0111 16:03:46.711]     status: "True"
I0111 16:03:46.712]     type: Progressing
I0111 16:03:46.712]   observedGeneration: 4
I0111 16:03:46.712]   replicas: 4
I0111 16:03:46.712]   unavailableReplicas: 4
I0111 16:03:46.712]   updatedReplicas: 1
W0111 16:03:46.812] error: you must specify resources by --filename when --local is set.
W0111 16:03:46.813] Example resource specifications include:
W0111 16:03:46.813]    '-f rsrc.yaml'
W0111 16:03:46.813]    '--filename=rsrc.json'
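[Note: the error above is kubectl refusing --local without an input file: --local renders the change client-side without contacting the API server, so the object must come from -f rather than from the cluster. A minimal sketch, with deploy.yaml as a hypothetical local manifest:
  # --local never talks to the server; the resource must be supplied by file:
  kubectl set resources -f deploy.yaml --local --limits=cpu=200m -o yaml]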
I0111 16:03:46.913] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0111 16:03:46.974] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0111 16:03:47.062] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0111 16:03:48.567]                 pod-template-hash=55c9b846cc
I0111 16:03:48.567] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0111 16:03:48.567]                 deployment.kubernetes.io/max-replicas: 2
I0111 16:03:48.567]                 deployment.kubernetes.io/revision: 1
I0111 16:03:48.567] Controlled By:  Deployment/test-nginx-apps
I0111 16:03:48.567] Replicas:       1 current / 1 desired
I0111 16:03:48.567] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 16:03:48.567] Pod Template:
I0111 16:03:48.568]   Labels:  app=test-nginx-apps
I0111 16:03:48.568]            pod-template-hash=55c9b846cc
I0111 16:03:48.568]   Containers:
I0111 16:03:48.568]    nginx:
I0111 16:03:48.568]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
I0111 16:03:52.856]     Image:	k8s.gcr.io/nginx:test-cmd
I0111 16:03:52.945] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 16:03:53.051] deployment.extensions/nginx rolled back
I0111 16:03:54.143] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 16:03:54.315] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 16:03:54.418] deployment.extensions/nginx rolled back
W0111 16:03:54.519] error: unable to find specified revision 1000000 in history
I0111 16:03:55.511] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 16:03:55.605] deployment.extensions/nginx paused
W0111 16:03:55.706] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0111 16:03:55.807] deployment.extensions/nginx resumed
I0111 16:03:55.903] deployment.extensions/nginx rolled back
I0111 16:03:56.075]     deployment.kubernetes.io/revision-history: 1,3
W0111 16:03:56.261] error: desired revision (3) is different from the running revision (5)
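[Note: the three rollout errors above are the usual kubectl rollout guard rails: an unknown revision, a paused deployment, and a revision check against the currently running revision. A sketch of the commands involved, with revision numbers chosen for illustration:
  kubectl rollout history deployment/nginx                       # list recorded revisions
  kubectl rollout undo deployment/nginx --to-revision=1000000    # unknown revision -> error
  kubectl rollout pause deployment/nginx                         # a paused deployment rejects rollbacks
  kubectl rollout resume deployment/nginx
  kubectl rollout status deployment/nginx --revision=3           # plausibly the source of the revision-mismatch error]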
I0111 16:03:56.406] deployment.apps/nginx2 created
I0111 16:03:56.487] deployment.extensions "nginx2" deleted
I0111 16:03:56.566] deployment.extensions "nginx" deleted
I0111 16:03:56.655] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:03:56.824] deployment.apps/nginx-deployment created
I0111 16:03:56.918] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 18 lines ...
W0111 16:03:58.472] I0111 16:03:56.827610   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment", UID:"801129a4-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1944", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0111 16:03:58.473] I0111 16:03:56.830896   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment-646d4f779d", UID:"8011af92-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-xj6kq
W0111 16:03:58.473] I0111 16:03:56.833337   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment-646d4f779d", UID:"8011af92-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-brlxl
W0111 16:03:58.473] I0111 16:03:56.833601   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment-646d4f779d", UID:"8011af92-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-qg49m
W0111 16:03:58.474] I0111 16:03:57.188228   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment", UID:"801129a4-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1959", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W0111 16:03:58.474] I0111 16:03:57.191474   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment-85db47bbdb", UID:"8048b58e-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1960", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-tq2k4
W0111 16:03:58.474] error: unable to find container named "redis"
W0111 16:03:58.474] I0111 16:03:58.382344   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment", UID:"801129a4-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1977", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W0111 16:03:58.475] I0111 16:03:58.392433   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment-646d4f779d", UID:"8011af92-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-xj6kq
W0111 16:03:58.475] I0111 16:03:58.401197   55966 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment", UID:"801129a4-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1980", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 1
W0111 16:03:58.476] I0111 16:03:58.404804   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment-dc756cc6", UID:"80fd9239-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1987", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-l2gmn
I0111 16:03:58.576] apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 16:03:58.576] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 76 lines ...
I0111 16:04:03.094] Namespace:    namespace-1547222641-22113
I0111 16:04:03.094] Selector:     app=guestbook,tier=frontend
I0111 16:04:03.094] Labels:       app=guestbook
I0111 16:04:03.094]               tier=frontend
I0111 16:04:03.095] Annotations:  <none>
I0111 16:04:03.095] Replicas:     3 current / 3 desired
I0111 16:04:03.095] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:03.095] Pod Template:
I0111 16:04:03.095]   Labels:  app=guestbook
I0111 16:04:03.095]            tier=frontend
I0111 16:04:03.095]   Containers:
I0111 16:04:03.095]    php-redis:
I0111 16:04:03.096]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 16:04:03.196] Namespace:    namespace-1547222641-22113
I0111 16:04:03.196] Selector:     app=guestbook,tier=frontend
I0111 16:04:03.196] Labels:       app=guestbook
I0111 16:04:03.196]               tier=frontend
I0111 16:04:03.197] Annotations:  <none>
I0111 16:04:03.197] Replicas:     3 current / 3 desired
I0111 16:04:03.197] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:03.197] Pod Template:
I0111 16:04:03.197]   Labels:  app=guestbook
I0111 16:04:03.197]            tier=frontend
I0111 16:04:03.197]   Containers:
I0111 16:04:03.197]    php-redis:
I0111 16:04:03.198]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0111 16:04:03.296] Namespace:    namespace-1547222641-22113
I0111 16:04:03.296] Selector:     app=guestbook,tier=frontend
I0111 16:04:03.296] Labels:       app=guestbook
I0111 16:04:03.296]               tier=frontend
I0111 16:04:03.296] Annotations:  <none>
I0111 16:04:03.296] Replicas:     3 current / 3 desired
I0111 16:04:03.297] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:03.297] Pod Template:
I0111 16:04:03.297]   Labels:  app=guestbook
I0111 16:04:03.297]            tier=frontend
I0111 16:04:03.297]   Containers:
I0111 16:04:03.297]    php-redis:
I0111 16:04:03.297]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0111 16:04:03.402] Namespace:    namespace-1547222641-22113
I0111 16:04:03.402] Selector:     app=guestbook,tier=frontend
I0111 16:04:03.402] Labels:       app=guestbook
I0111 16:04:03.402]               tier=frontend
I0111 16:04:03.402] Annotations:  <none>
I0111 16:04:03.403] Replicas:     3 current / 3 desired
I0111 16:04:03.403] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:03.403] Pod Template:
I0111 16:04:03.403]   Labels:  app=guestbook
I0111 16:04:03.403]            tier=frontend
I0111 16:04:03.403]   Containers:
I0111 16:04:03.403]    php-redis:
I0111 16:04:03.403]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 10 lines ...
I0111 16:04:03.405]   Type    Reason            Age   From                   Message
I0111 16:04:03.405]   ----    ------            ----  ----                   -------
I0111 16:04:03.405]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-lzvxl
I0111 16:04:03.405]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-bp9mh
I0111 16:04:03.405]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-656q9
I0111 16:04:03.405]
W0111 16:04:03.506] E0111 16:04:01.046422   55966 replica_set.go:450] Sync "namespace-1547222627-25400/nginx-deployment-669d4f8fc9" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-669d4f8fc9": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547222627-25400/nginx-deployment-669d4f8fc9, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8259d0ff-15ba-11e9-8e70-0242ac110002, UID in object meta: 
W0111 16:04:03.507] I0111 16:04:01.096484   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222627-25400", Name:"nginx-deployment-646d4f779d", UID:"817cef02-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-nxfx9
W0111 16:04:03.507] E0111 16:04:01.196409   55966 replica_set.go:450] Sync "namespace-1547222627-25400/nginx-deployment-7b8f7659b7" failed with replicasets.apps "nginx-deployment-7b8f7659b7" not found
W0111 16:04:03.507] E0111 16:04:01.346179   55966 replica_set.go:450] Sync "namespace-1547222627-25400/nginx-deployment-75bf89d86f" failed with replicasets.apps "nginx-deployment-75bf89d86f" not found
W0111 16:04:03.507] E0111 16:04:01.397565   55966 replica_set.go:450] Sync "namespace-1547222627-25400/nginx-deployment-646d4f779d" failed with replicasets.apps "nginx-deployment-646d4f779d" not found
W0111 16:04:03.508] I0111 16:04:01.686943   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend", UID:"82f659f2-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2131", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zjzwd
W0111 16:04:03.508] I0111 16:04:01.689906   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend", UID:"82f659f2-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2131", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9mk8j
W0111 16:04:03.508] I0111 16:04:01.690439   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend", UID:"82f659f2-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2131", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7btld
W0111 16:04:03.508] E0111 16:04:01.846263   55966 replica_set.go:450] Sync "namespace-1547222641-22113/frontend" failed with replicasets.apps "frontend" not found
W0111 16:04:03.509] I0111 16:04:02.089963   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend-no-cascade", UID:"83342137-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2146", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-pnmm5
W0111 16:04:03.509] I0111 16:04:02.092358   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend-no-cascade", UID:"83342137-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2146", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-fmdtl
W0111 16:04:03.509] I0111 16:04:02.093300   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend-no-cascade", UID:"83342137-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2146", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-hxgpb
W0111 16:04:03.510] E0111 16:04:02.345821   55966 replica_set.go:450] Sync "namespace-1547222641-22113/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W0111 16:04:03.510] I0111 16:04:02.863617   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend", UID:"83aa3d91-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2166", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-lzvxl
W0111 16:04:03.510] I0111 16:04:02.866167   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend", UID:"83aa3d91-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2166", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bp9mh
W0111 16:04:03.511] I0111 16:04:02.866486   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend", UID:"83aa3d91-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2166", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-656q9
I0111 16:04:03.611] Successful describe rs:
I0111 16:04:03.612] Name:         frontend
I0111 16:04:03.612] Namespace:    namespace-1547222641-22113
I0111 16:04:03.612] Selector:     app=guestbook,tier=frontend
I0111 16:04:03.612] Labels:       app=guestbook
I0111 16:04:03.612]               tier=frontend
I0111 16:04:03.612] Annotations:  <none>
I0111 16:04:03.612] Replicas:     3 current / 3 desired
I0111 16:04:03.613] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:03.613] Pod Template:
I0111 16:04:03.613]   Labels:  app=guestbook
I0111 16:04:03.613]            tier=frontend
I0111 16:04:03.613]   Containers:
I0111 16:04:03.613]    php-redis:
I0111 16:04:03.613]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 16:04:03.661] Namespace:    namespace-1547222641-22113
I0111 16:04:03.661] Selector:     app=guestbook,tier=frontend
I0111 16:04:03.661] Labels:       app=guestbook
I0111 16:04:03.661]               tier=frontend
I0111 16:04:03.661] Annotations:  <none>
I0111 16:04:03.661] Replicas:     3 current / 3 desired
I0111 16:04:03.661] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:03.662] Pod Template:
I0111 16:04:03.662]   Labels:  app=guestbook
I0111 16:04:03.662]            tier=frontend
I0111 16:04:03.662]   Containers:
I0111 16:04:03.662]    php-redis:
I0111 16:04:03.662]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 16:04:03.770] Namespace:    namespace-1547222641-22113
I0111 16:04:03.771] Selector:     app=guestbook,tier=frontend
I0111 16:04:03.771] Labels:       app=guestbook
I0111 16:04:03.771]               tier=frontend
I0111 16:04:03.771] Annotations:  <none>
I0111 16:04:03.771] Replicas:     3 current / 3 desired
I0111 16:04:03.771] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:03.771] Pod Template:
I0111 16:04:03.771]   Labels:  app=guestbook
I0111 16:04:03.771]            tier=frontend
I0111 16:04:03.771]   Containers:
I0111 16:04:03.772]    php-redis:
I0111 16:04:03.772]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0111 16:04:03.871] Namespace:    namespace-1547222641-22113
I0111 16:04:03.871] Selector:     app=guestbook,tier=frontend
I0111 16:04:03.872] Labels:       app=guestbook
I0111 16:04:03.872]               tier=frontend
I0111 16:04:03.872] Annotations:  <none>
I0111 16:04:03.872] Replicas:     3 current / 3 desired
I0111 16:04:03.872] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:03.872] Pod Template:
I0111 16:04:03.872]   Labels:  app=guestbook
I0111 16:04:03.872]            tier=frontend
I0111 16:04:03.872]   Containers:
I0111 16:04:03.873]    php-redis:
I0111 16:04:03.873]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0111 16:04:09.221] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 16:04:09.309] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 16:04:09.379] horizontalpodautoscaler.autoscaling "frontend" deleted
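[Note: the hpa assertion above (apps.sh:647 expecting "2 3 80") and the required-flag error below both come from kubectl autoscale; --max is mandatory, which is the failure shown. A minimal sketch of both cases:
  # Create an HPA with bounds and a CPU target:
  kubectl autoscale rs frontend --min=2 --max=3 --cpu-percent=80
  # Inspect the spec fields the test asserts on:
  kubectl get hpa frontend -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'
  # Omitting --max fails with: Error: required flag(s) "max" not set
  kubectl autoscale rs frontend --min=2 --cpu-percent=80]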
W0111 16:04:09.480] I0111 16:04:08.792524   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend", UID:"8732c2a2-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6shtp
W0111 16:04:09.481] I0111 16:04:08.795103   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend", UID:"8732c2a2-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qvb5f
W0111 16:04:09.481] I0111 16:04:08.795226   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547222641-22113", Name:"frontend", UID:"8732c2a2-15ba-11e9-8e70-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7h2vt
W0111 16:04:09.481] Error: required flag(s) "max" not set
W0111 16:04:09.481] 
W0111 16:04:09.481] 
W0111 16:04:09.481] Examples:
W0111 16:04:09.482]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 16:04:09.482]   kubectl autoscale deployment foo --min=2 --max=10
W0111 16:04:09.482]   
... skipping 88 lines ...
I0111 16:04:12.542] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 16:04:12.631] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 16:04:12.742] statefulset.apps/nginx rolled back
I0111 16:04:12.837] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 16:04:12.932] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 16:04:13.034] Successful
I0111 16:04:13.035] message:error: unable to find specified revision 1000000 in history
I0111 16:04:13.035] has:unable to find specified revision
I0111 16:04:13.136] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 16:04:13.228] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 16:04:13.328] statefulset.apps/nginx rolled back
I0111 16:04:13.420] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0111 16:04:13.508] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
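[Note: StatefulSets share the rollout machinery exercised for deployments above, so the same history/undo semantics apply; the "unable to find specified revision 1000000" message is the out-of-range case. A sketch with an illustrative revision:
  kubectl rollout history statefulset/nginx
  kubectl rollout undo statefulset/nginx                          # back to the previous revision
  kubectl rollout undo statefulset/nginx --to-revision=1000000    # -> unable to find specified revision]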
... skipping 58 lines ...
I0111 16:04:15.349] Name:         mock
I0111 16:04:15.349] Namespace:    namespace-1547222654-1154
I0111 16:04:15.349] Selector:     app=mock
I0111 16:04:15.349] Labels:       app=mock
I0111 16:04:15.349] Annotations:  <none>
I0111 16:04:15.349] Replicas:     1 current / 1 desired
I0111 16:04:15.349] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:15.349] Pod Template:
I0111 16:04:15.349]   Labels:  app=mock
I0111 16:04:15.349]   Containers:
I0111 16:04:15.349]    mock-container:
I0111 16:04:15.350]     Image:        k8s.gcr.io/pause:2.0
I0111 16:04:15.350]     Port:         9949/TCP
... skipping 56 lines ...
I0111 16:04:17.597] Name:         mock
I0111 16:04:17.597] Namespace:    namespace-1547222654-1154
I0111 16:04:17.597] Selector:     app=mock
I0111 16:04:17.597] Labels:       app=mock
I0111 16:04:17.597] Annotations:  <none>
I0111 16:04:17.597] Replicas:     1 current / 1 desired
I0111 16:04:17.597] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:17.597] Pod Template:
I0111 16:04:17.597]   Labels:  app=mock
I0111 16:04:17.597]   Containers:
I0111 16:04:17.598]    mock-container:
I0111 16:04:17.598]     Image:        k8s.gcr.io/pause:2.0
I0111 16:04:17.598]     Port:         9949/TCP
... skipping 56 lines ...
I0111 16:04:19.831] Name:         mock
I0111 16:04:19.831] Namespace:    namespace-1547222654-1154
I0111 16:04:19.831] Selector:     app=mock
I0111 16:04:19.831] Labels:       app=mock
I0111 16:04:19.831] Annotations:  <none>
I0111 16:04:19.831] Replicas:     1 current / 1 desired
I0111 16:04:19.831] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:19.831] Pod Template:
I0111 16:04:19.831]   Labels:  app=mock
I0111 16:04:19.831]   Containers:
I0111 16:04:19.832]    mock-container:
I0111 16:04:19.832]     Image:        k8s.gcr.io/pause:2.0
I0111 16:04:19.832]     Port:         9949/TCP
... skipping 42 lines ...
I0111 16:04:22.079] Namespace:    namespace-1547222654-1154
I0111 16:04:22.079] Selector:     app=mock
I0111 16:04:22.079] Labels:       app=mock
I0111 16:04:22.079]               status=replaced
I0111 16:04:22.079] Annotations:  <none>
I0111 16:04:22.079] Replicas:     1 current / 1 desired
I0111 16:04:22.079] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:22.080] Pod Template:
I0111 16:04:22.080]   Labels:  app=mock
I0111 16:04:22.080]   Containers:
I0111 16:04:22.080]    mock-container:
I0111 16:04:22.080]     Image:        k8s.gcr.io/pause:2.0
I0111 16:04:22.080]     Port:         9949/TCP
... skipping 11 lines ...
I0111 16:04:22.082] Namespace:    namespace-1547222654-1154
I0111 16:04:22.082] Selector:     app=mock2
I0111 16:04:22.082] Labels:       app=mock2
I0111 16:04:22.082]               status=replaced
I0111 16:04:22.082] Annotations:  <none>
I0111 16:04:22.082] Replicas:     1 current / 1 desired
I0111 16:04:22.082] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 16:04:22.082] Pod Template:
I0111 16:04:22.082]   Labels:  app=mock2
I0111 16:04:22.083]   Containers:
I0111 16:04:22.083]    mock-container:
I0111 16:04:22.083]     Image:        k8s.gcr.io/pause:2.0
I0111 16:04:22.083]     Port:         9949/TCP
... skipping 107 lines ...
I0111 16:04:27.020] +++ [0111 16:04:27] Testing persistent volumes
I0111 16:04:27.107] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 16:04:27.257] persistentvolume/pv0001 created
I0111 16:04:27.354] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0111 16:04:27.430] persistentvolume "pv0001" deleted
W0111 16:04:27.531] I0111 16:04:26.167394   55966 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547222654-1154", Name:"mock", UID:"918e2ce7-15ba-11e9-8e70-0242ac110002", APIVersion:"v1", ResourceVersion:"2627", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-2dnzl
W0111 16:04:27.531] E0111 16:04:27.263217   55966 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
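[Note: the pv_protection_controller error above is a routine optimistic-concurrency conflict: the write carried a stale resourceVersion and the client is told to re-read and retry. A rough shell sketch of the same read-modify-retry pattern; the resource name and retry count are illustrative:
  # Re-read the latest object before each write attempt:
  for i in 1 2 3; do
    kubectl get pv pv0001 -o yaml > /tmp/pv.yaml || break
    # ... edit /tmp/pv.yaml here ...
    kubectl replace -f /tmp/pv.yaml && break   # replace fails if resourceVersion is stale
    sleep 1
  done]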
I0111 16:04:27.632] persistentvolume/pv0002 created
I0111 16:04:27.669] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0111 16:04:27.741] persistentvolume "pv0002" deleted
I0111 16:04:27.898] persistentvolume/pv0003 created
I0111 16:04:27.993] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0111 16:04:28.070] persistentvolume "pv0003" deleted
... skipping 470 lines ...
I0111 16:04:33.813] yes
I0111 16:04:33.814] has:the server doesn't have a resource type
I0111 16:04:33.884] Successful
I0111 16:04:33.884] message:yes
I0111 16:04:33.884] has:yes
I0111 16:04:33.956] Successful
I0111 16:04:33.956] message:error: --subresource can not be used with NonResourceURL
I0111 16:04:33.956] has:subresource can not be used with NonResourceURL
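[Note: the --subresource error above reflects how kubectl auth can-i splits its checks: non-resource URLs are authorized as opaque paths, while subresources only hang off a resource. A minimal sketch of the two valid forms:
  kubectl auth can-i get /logs                     # non-resource URL; no --subresource allowed
  kubectl auth can-i get pods --subresource=log    # subresource of a resource]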
I0111 16:04:34.029] Successful
I0111 16:04:34.110] Successful
I0111 16:04:34.110] message:yes
I0111 16:04:34.110] 0
I0111 16:04:34.110] has:0
... skipping 6 lines ...
I0111 16:04:34.300] role.rbac.authorization.k8s.io/testing-R reconciled
I0111 16:04:34.388] legacy-script.sh:737: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0111 16:04:34.479] legacy-script.sh:738: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0111 16:04:34.565] legacy-script.sh:739: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0111 16:04:34.662] legacy-script.sh:740: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0111 16:04:34.740] Successful
I0111 16:04:34.741] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0111 16:04:34.741] has:only rbac.authorization.k8s.io/v1 is supported
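[Note: the reconciled roles above and the v1-only error come from kubectl auth reconcile, which creates or updates RBAC objects to match a manifest but only accepts the rbac.authorization.k8s.io/v1 API group. A sketch, with rbac-v1.yaml as a hypothetical manifest:
  kubectl auth reconcile -f rbac-v1.yaml   # v1 Role/ClusterRole and bindings reconcile cleanly
  # the same command against a v1beta1 ClusterRole manifest yields the error above]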
I0111 16:04:34.834] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0111 16:04:34.839] role.rbac.authorization.k8s.io "testing-R" deleted
I0111 16:04:34.849] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0111 16:04:34.858] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0111 16:04:34.866] Recording: run_retrieve_multiple_tests
... skipping 1021 lines ...
I0111 16:05:04.112] message:node/127.0.0.1 already uncordoned (dry run)
I0111 16:05:04.112] has:already uncordoned
I0111 16:05:04.208] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0111 16:05:04.293] node/127.0.0.1 labeled
I0111 16:05:04.398] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0111 16:05:04.473] Successful
I0111 16:05:04.474] message:error: cannot specify both a node name and a --selector option
I0111 16:05:04.474] See 'kubectl drain -h' for help and examples
I0111 16:05:04.474] has:cannot specify both a node name
I0111 16:05:04.547] Successful
I0111 16:05:04.547] message:error: USAGE: cordon NODE [flags]
I0111 16:05:04.547] See 'kubectl cordon -h' for help and examples
I0111 16:05:04.547] has:error\: USAGE\: cordon NODE
I0111 16:05:04.627] node/127.0.0.1 already uncordoned
I0111 16:05:04.705] Successful
I0111 16:05:04.706] message:error: You must provide one or more resources by argument or filename.
I0111 16:05:04.706] Example resource specifications include:
I0111 16:05:04.706]    '-f rsrc.yaml'
I0111 16:05:04.706]    '--filename=rsrc.json'
I0111 16:05:04.706]    '<resource> <name>'
I0111 16:05:04.706]    '<resource>'
I0111 16:05:04.706] has:must provide one or more resources
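[Note: the drain and cordon failures above are argument-validation cases: drain accepts a node name or a --selector but not both, and cordon wants exactly one NODE argument. A sketch of the valid forms:
  kubectl drain 127.0.0.1 --ignore-daemonsets                 # by node name
  kubectl drain --selector=test=label --ignore-daemonsets     # by label selector
  kubectl cordon 127.0.0.1
  kubectl uncordon 127.0.0.1]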
... skipping 15 lines ...
I0111 16:05:05.858] Successful
I0111 16:05:05.859] message:The following kubectl-compatible plugins are available:
I0111 16:05:05.859] 
I0111 16:05:05.859] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0111 16:05:05.859]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0111 16:05:05.859] 
I0111 16:05:05.859] error: one plugin warning was found
I0111 16:05:05.859] has:kubectl-version overwrites existing command: "kubectl version"
I0111 16:05:05.934] Successful
I0111 16:05:05.934] message:The following kubectl-compatible plugins are available:
I0111 16:05:05.935] 
I0111 16:05:05.935] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 16:05:05.935] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0111 16:05:05.935]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 16:05:05.935] 
I0111 16:05:05.935] error: one plugin warning was found
I0111 16:05:05.935] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0111 16:05:06.008] Successful
I0111 16:05:06.008] message:The following kubectl-compatible plugins are available:
I0111 16:05:06.008] 
I0111 16:05:06.008] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 16:05:06.008] has:plugins are available
I0111 16:05:06.082] Successful
I0111 16:05:06.082] message:
I0111 16:05:06.083] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I0111 16:05:06.083] error: unable to find any kubectl plugins in your PATH
I0111 16:05:06.083] has:unable to find any kubectl plugins in your PATH
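[Note: the plugin checks above and below rely on kubectl's discovery rule: any executable named kubectl-* on PATH is a plugin, with warnings for names that collide with built-ins or shadow another plugin. A minimal sketch of a plugin that would produce the "I am plugin foo" output below; the /tmp/bin path is illustrative:
  mkdir -p /tmp/bin
  printf '#!/bin/sh\necho "I am plugin foo"\n' > /tmp/bin/kubectl-foo
  chmod +x /tmp/bin/kubectl-foo
  PATH=/tmp/bin:$PATH kubectl plugin list   # lists plugins, warns on shadowing
  PATH=/tmp/bin:$PATH kubectl foo           # dispatches to the plugin]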
I0111 16:05:06.153] Successful
I0111 16:05:06.153] message:I am plugin foo
I0111 16:05:06.154] has:plugin foo
I0111 16:05:06.227] Successful
I0111 16:05:06.227] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1639+33a9c6e892f69e", GitCommit:"33a9c6e892f69e20be9527ba00bd33dfa5de221b", GitTreeState:"clean", BuildDate:"2019-01-11T15:57:56Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0111 16:05:06.698] 
I0111 16:05:06.701] +++ Running case: test-cmd.run_impersonation_tests 
I0111 16:05:06.704] +++ working dir: /go/src/k8s.io/kubernetes
I0111 16:05:06.707] +++ command: run_impersonation_tests
I0111 16:05:06.717] +++ [0111 16:05:06] Testing impersonation
I0111 16:05:06.784] Successful
I0111 16:05:06.784] message:error: requesting groups or user-extra for  without impersonating a user
I0111 16:05:06.784] has:without impersonating a user
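[Note: the impersonation error above is the group-without-user case: kubectl forwards --as/--as-group as Impersonate-User/Impersonate-Group headers, and groups or user-extra are rejected unless a user is impersonated too. A sketch with illustrative identities:
  kubectl get pods --as=user1 --as-group=system:masters   # valid: groups ride along with a user
  kubectl get pods --as-group=system:masters              # invalid: groups without a user]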
I0111 16:05:06.937] certificatesigningrequest.certificates.k8s.io/foo created
I0111 16:05:07.038] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0111 16:05:07.126] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0111 16:05:07.208] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0111 16:05:07.385] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 20 lines ...
W0111 16:05:07.877] I0111 16:05:07.874796   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.877] I0111 16:05:07.874811   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.878] I0111 16:05:07.874820   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.878] I0111 16:05:07.874821   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.878] I0111 16:05:07.874841   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.878] I0111 16:05:07.874857   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.878] W0111 16:05:07.874912   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.879] I0111 16:05:07.874912   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.879] I0111 16:05:07.874948   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.879] I0111 16:05:07.874991   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.879] I0111 16:05:07.875000   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.879] I0111 16:05:07.875143   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.880] I0111 16:05:07.875153   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 3 lines ...
W0111 16:05:07.881] I0111 16:05:07.875710   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.881] I0111 16:05:07.879336   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.881] I0111 16:05:07.879360   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.881] I0111 16:05:07.879896   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.881] I0111 16:05:07.875987   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.882] I0111 16:05:07.879931   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.882] W0111 16:05:07.876016   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.882] W0111 16:05:07.876015   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.882] I0111 16:05:07.876179   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.883] I0111 16:05:07.880009   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.883] W0111 16:05:07.876319   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.883] I0111 16:05:07.876751   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.883] I0111 16:05:07.880040   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.883] I0111 16:05:07.876774   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.884] I0111 16:05:07.880062   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.884] I0111 16:05:07.876800   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.884] I0111 16:05:07.880079   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.884] W0111 16:05:07.876862   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.885] I0111 16:05:07.876877   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.885] I0111 16:05:07.880105   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.885] I0111 16:05:07.876936   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.885] I0111 16:05:07.880128   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.885] I0111 16:05:07.876955   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.886] I0111 16:05:07.880144   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.886] I0111 16:05:07.876988   52622 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0111 16:05:07.886] I0111 16:05:07.877055   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.886] I0111 16:05:07.880172   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.886] W0111 16:05:07.877106   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.887] I0111 16:05:07.877109   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.887] I0111 16:05:07.880199   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.887] W0111 16:05:07.877109   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.887] W0111 16:05:07.877121   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.888] I0111 16:05:07.877146   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.888] I0111 16:05:07.880309   52622 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 16:05:07.888] W0111 16:05:07.877154   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.888] W0111 16:05:07.877163   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.889] W0111 16:05:07.877229   52622 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 16:05:07.889] I0111 16:05:07.877293   52622 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0111 16:05:07.889] E0111 16:05:07.877311   52622 controller.go:172] Get https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:6443: connect: connection refused
W0111 16:05:07.947] + make test-integration
I0111 16:05:08.048] No resources found
I0111 16:05:08.048] pod "test-pod-1" force deleted
I0111 16:05:08.048] +++ [0111 16:05:07] TESTS PASSED
... skipping 49 lines ...
I0111 16:15:45.201] [restful] 2019/01/11 16:08:13 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:41159/swaggerapi
I0111 16:15:45.201] [restful] 2019/01/11 16:08:13 log.go:33: [restful/swagger] https://127.0.0.1:41159/swaggerui/ is mapped to folder /swagger-ui/
I0111 16:15:45.201] [restful] 2019/01/11 16:08:21 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:42455/swaggerapi
I0111 16:15:45.201] [restful] 2019/01/11 16:08:21 log.go:33: [restful/swagger] https://127.0.0.1:42455/swaggerui/ is mapped to folder /swagger-ui/
I0111 16:15:45.202] [restful] 2019/01/11 16:08:24 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:42455/swaggerapi
I0111 16:15:45.202] [restful] 2019/01/11 16:08:24 log.go:33: [restful/swagger] https://127.0.0.1:42455/swaggerui/ is mapped to folder /swagger-ui/
I0111 16:15:45.202] FAIL	k8s.io/kubernetes/test/integration/client	103.764s
I0111 16:15:45.202] ok  	k8s.io/kubernetes/test/integration/configmap	6.972s
I0111 16:15:45.202] ok  	k8s.io/kubernetes/test/integration/cronjob	17.897s
I0111 16:15:45.202] ok  	k8s.io/kubernetes/test/integration/daemonset	531.689s
I0111 16:15:45.203] ok  	k8s.io/kubernetes/test/integration/defaulttolerationseconds	5.846s
I0111 16:15:45.203] ok  	k8s.io/kubernetes/test/integration/deployment	208.752s
I0111 16:15:45.203] [restful] 2019/01/11 16:07:23 log.go:33: [restful/swagger] listing is available at https://172.17.0.2:41191/swaggerapi
... skipping 191 lines ...
I0111 16:17:44.906] [restful] 2019/01/11 16:11:26 log.go:33: [restful/swagger] https://127.0.0.1:34185/swaggerui/ is mapped to folder /swagger-ui/
I0111 16:17:44.906] ok  	k8s.io/kubernetes/test/integration/tls	18.045s
I0111 16:17:44.906] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.931s
I0111 16:17:44.906] ok  	k8s.io/kubernetes/test/integration/volume	93.888s
I0111 16:17:44.906] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	147.531s
I0111 16:17:58.443] +++ [0111 16:17:58] Saved JUnit XML test report to /workspace/artifacts/junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-160518.xml
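[Note: with the JUnit report saved, a plausible local follow-up is to re-run only the package flagged FAIL above; one assumed invocation via the repo's make target, where WHAT scopes the run and GOFLAGS is passed through to go test:
  make test-integration WHAT=./test/integration/client GOFLAGS="-v"]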
I0111 16:17:58.447] Makefile:184: recipe for target 'test' failed
I0111 16:17:58.458] +++ [0111 16:17:58] Cleaning up etcd
W0111 16:17:58.559] make[1]: *** [test] Error 1
W0111 16:17:58.559] !!! [0111 16:17:58] Call tree:
W0111 16:17:58.559] !!! [0111 16:17:58]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0111 16:17:58.716] +++ [0111 16:17:58] Integration test cleanup complete
I0111 16:17:58.716] Makefile:203: recipe for target 'test-integration' failed
W0111 16:17:58.817] make: *** [test-integration] Error 1
W0111 16:18:01.609] Traceback (most recent call last):
W0111 16:18:01.609]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0111 16:18:01.625]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0111 16:18:01.625]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0111 16:18:01.626]     check(*cmd)
W0111 16:18:01.626]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0111 16:18:01.626]     subprocess.check_call(cmd)
W0111 16:18:01.626]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0111 16:18:01.664]     raise CalledProcessError(retcode, cmd)
W0111 16:18:01.665] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181218-db74ab3f4', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0111 16:18:01.670] Command failed
I0111 16:18:01.670] process 518 exited with code 1 after 27.7m
E0111 16:18:01.671] FAIL: ci-kubernetes-integration-master
I0111 16:18:01.671] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0111 16:18:02.620] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0111 16:18:02.684] process 123607 exited with code 0 after 0.0m
I0111 16:18:02.684] Call:  gcloud config get-value account
I0111 16:18:03.089] process 123619 exited with code 0 after 0.0m
I0111 16:18:03.089] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 16:18:03.089] Upload result and artifacts...
I0111 16:18:03.090] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/8009
I0111 16:18:03.090] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/8009/artifacts
W0111 16:18:04.276] CommandException: One or more URLs matched no objects.
E0111 16:18:04.442] Command failed
I0111 16:18:04.442] process 123631 exited with code 1 after 0.0m
W0111 16:18:04.442] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/8009/artifacts not exist yet
I0111 16:18:04.443] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/8009/artifacts
I0111 16:18:09.529] process 123773 exited with code 0 after 0.1m
W0111 16:18:09.529] metadata path /workspace/_artifacts/metadata.json does not exist
W0111 16:18:09.530] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...