PR: danielqsj: Fix typos like limitting
Result: FAILURE
Tests: 1 failed / 578 succeeded
Started: 2018-12-07 06:16
Elapsed: 26m5s
Version: v1.14.0-alpha.0.901+5d76949082d149
Builder: gke-prow-default-pool-3c8994a8-nbb0
Refs: master:cd5f41ec, 71684:3c055aa4
pod: 83a39245-f9e7-11e8-aa74-0a580a6c01e0
infra-commit: d6f7bb8bf
repo: k8s.io/kubernetes
repo-commit: 5d76949082d14918dea6d2bae668bb58512a4408
repos: k8s.io/kubernetes: master:cd5f41ec1ad45c831df38d986a682e5580eaead7, 71684:3c055aa4b47232bf7d6b5d5a0901dae239e33c59

Test Failures


k8s.io/kubernetes/test/integration/apiserver Test202StatusCode 3.68s

go test -v k8s.io/kubernetes/test/integration/apiserver -run Test202StatusCode$
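
Judging by its name, Test202StatusCode appears to assert that the apiserver acknowledges asynchronous deletions with HTTP 202 Accepted. A minimal Go sketch of that kind of assertion follows; the server address and resource path are illustrative assumptions, not taken from the test itself:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical DELETE against a locally running test apiserver.
	req, err := http.NewRequest(http.MethodDelete,
		"http://127.0.0.1:8080/apis/apps/v1/namespaces/default/replicasets/rs", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// An asynchronous delete should be acknowledged with 202 Accepted.
	if resp.StatusCode != http.StatusAccepted {
		fmt.Printf("expected 202 Accepted, got %d\n", resp.StatusCode)
	}
}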
I1207 06:30:06.089698  115804 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I1207 06:30:06.089732  115804 master.go:272] Node port range unspecified. Defaulting to 30000-32767.
I1207 06:30:06.089744  115804 master.go:228] Using reconciler: 
I1207 06:30:06.091274  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.091293  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.091358  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.091405  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.091785  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.093746  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.093775  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.093809  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.093860  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.094349  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.094372  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.094407  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.094425  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.094514  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.095033  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.095104  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.095119  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.095155  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.095187  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.096521  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.097040  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.097056  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.097097  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.097135  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.097870  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.097910  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.097960  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.098078  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.098293  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.099473  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.099502  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.099533  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.099622  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.099848  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.100709  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.100947  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.100971  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.101014  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.101062  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.126657  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.127046  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.127069  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.127132  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.127221  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.130504  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.132022  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.132052  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.132158  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.132255  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.132789  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.134568  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.134600  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.134690  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.134767  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.140164  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.140620  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.140643  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.140672  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.140871  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.141791  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.141816  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.141852  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.141958  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.142004  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.142319  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.142732  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.142749  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.142781  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.142834  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.143290  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.143491  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.143508  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.143536  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.143647  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.143903  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.143980  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.143996  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.144029  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.144067  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.144294  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.144467  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.144482  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.144510  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.144575  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.144830  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.162625  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.162654  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.162709  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.162775  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.163972  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.163999  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.164053  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.164106  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.164410  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.165020  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.165045  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.165075  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.165182  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.165412  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.170682  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.170719  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.170771  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.171450  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.174261  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.174311  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.175177  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.175193  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.175252  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.175731  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.176863  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.182783  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.186014  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.187366  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.187511  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.189496  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.191859  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.191881  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.191945  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.192737  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.193491  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.193829  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.201069  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.201156  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.201256  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.204494  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.204991  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.205014  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.205052  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.205102  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.206255  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.206386  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.206404  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.206441  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.206569  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.208884  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.212109  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.212170  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.212253  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.212336  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.212784  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.213344  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.213362  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.213392  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.213436  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.213659  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.214314  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.214356  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.214396  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.214471  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.214681  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.215227  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.215254  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.215283  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.215326  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.215679  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.216201  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.216236  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.216276  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.216320  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.216952  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.216967  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.216996  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.217066  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.217314  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.217966  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.217982  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.218053  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.218173  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.218460  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.218906  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.218934  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.218970  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.219050  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.219329  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.220981  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.221463  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.221478  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.221508  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.221783  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.222384  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.222461  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.222497  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.222581  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.222771  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.223425  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.223498  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.223519  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.223553  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.223713  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.224203  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.224240  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.224266  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.224314  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.224563  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.224814  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.225088  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.225102  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.225130  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.225165  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.226797  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.226803  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.226819  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.226852  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.228282  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.228741  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.229102  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.229118  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.229146  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.229389  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.229753  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.232887  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.232919  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.232952  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.233004  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.233746  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.233764  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.233794  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.233863  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.234121  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.234706  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.234736  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.234763  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.234826  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.235048  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.235483  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.235781  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.235802  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.235878  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.235927  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.236291  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.236320  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.236353  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.236426  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.236583  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.236847  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.237228  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.237249  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.237280  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.237349  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.238855  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.238875  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.238914  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.238932  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.238988  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.250280  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.250493  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.250521  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.250569  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.250765  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.251630  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.251840  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.251862  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.251926  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.252282  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.252914  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.253223  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.253290  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.253374  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.253676  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.254474  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.255188  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.255232  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.255280  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.255690  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.256535  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.268494  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.268527  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.268599  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.268688  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.269928  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.270567  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.270885  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.271020  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.271139  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.272055  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.275603  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.275637  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.281759  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.281931  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.283823  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.283850  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.283919  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.283992  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.285330  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.285738  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.286457  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.286485  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.286519  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.286570  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.288039  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.288061  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.288095  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.288137  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.288483  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.289150  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.289166  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.289199  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.289295  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.289524  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.299369  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.299455  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.299532  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.299704  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.300110  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.302991  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.303018  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.303058  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.303174  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.303436  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.304823  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:06.304848  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:06.304920  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:06.305010  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.305270  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:06.307269  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:30:06.312559  115804 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W1207 06:30:06.353983  115804 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1207 06:30:06.359069  115804 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1207 06:30:06.366852  115804 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1207 06:30:06.388107  115804 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I1207 06:30:07.089610  115804 clientconn.go:551] parsed scheme: ""
I1207 06:30:07.089647  115804 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 06:30:07.089699  115804 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 06:30:07.089765  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:07.090525  115804 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 06:30:07.408495  115804 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I1207 06:30:07.412163  115804 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I1207 06:30:07.412183  115804 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I1207 06:30:07.425636  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1207 06:30:07.437501  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1207 06:30:07.440875  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1207 06:30:07.443857  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I1207 06:30:07.452917  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I1207 06:30:07.455997  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I1207 06:30:07.459311  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1207 06:30:07.462652  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1207 06:30:07.465498  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1207 06:30:07.469037  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1207 06:30:07.472398  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I1207 06:30:07.475445  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1207 06:30:07.478251  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1207 06:30:07.487460  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1207 06:30:07.490627  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1207 06:30:07.495680  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1207 06:30:07.498845  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1207 06:30:07.501568  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1207 06:30:07.504232  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1207 06:30:07.506741  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1207 06:30:07.509534  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1207 06:30:07.512123  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1207 06:30:07.514549  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I1207 06:30:07.517942  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1207 06:30:07.520497  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1207 06:30:07.522825  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1207 06:30:07.525325  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1207 06:30:07.533508  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1207 06:30:07.537909  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1207 06:30:07.541355  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1207 06:30:07.557914  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1207 06:30:07.570327  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1207 06:30:07.575588  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1207 06:30:07.578392  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1207 06:30:07.581489  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1207 06:30:07.584581  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1207 06:30:07.587242  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1207 06:30:07.590826  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1207 06:30:07.593432  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1207 06:30:07.600806  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1207 06:30:07.604111  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1207 06:30:07.607933  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1207 06:30:07.611189  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1207 06:30:07.614149  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1207 06:30:07.617094  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1207 06:30:07.619715  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1207 06:30:07.624651  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1207 06:30:07.630877  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1207 06:30:07.634006  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1207 06:30:07.639231  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1207 06:30:07.645185  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1207 06:30:07.647977  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1207 06:30:07.674001  115804 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1207 06:30:07.714956  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1207 06:30:07.754104  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1207 06:30:07.797917  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1207 06:30:07.834080  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1207 06:30:07.874186  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1207 06:30:07.914103  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1207 06:30:07.954276  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1207 06:30:07.993867  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I1207 06:30:08.034920  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1207 06:30:08.074203  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1207 06:30:08.118079  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1207 06:30:08.154492  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1207 06:30:08.194503  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1207 06:30:08.233773  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1207 06:30:08.274160  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1207 06:30:08.314187  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1207 06:30:08.354517  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1207 06:30:08.394768  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1207 06:30:08.434449  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1207 06:30:08.474120  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1207 06:30:08.514188  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1207 06:30:08.553968  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1207 06:30:08.594588  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1207 06:30:08.634277  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1207 06:30:08.685115  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1207 06:30:08.716186  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1207 06:30:08.753855  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1207 06:30:08.794446  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1207 06:30:08.834392  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1207 06:30:08.874477  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1207 06:30:08.914294  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1207 06:30:08.954067  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1207 06:30:08.995361  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1207 06:30:09.034387  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1207 06:30:09.074188  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1207 06:30:09.114019  115804 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1207 06:30:09.155498  115804 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1207 06:30:09.193682  115804 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1207 06:30:09.233849  115804 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1207 06:30:09.274352  115804 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1207 06:30:09.319521  115804 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1207 06:30:09.354161  115804 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1207 06:30:09.394034  115804 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1207 06:30:09.435128  115804 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1207 06:30:09.474185  115804 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1207 06:30:09.514029  115804 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1207 06:30:09.554265  115804 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1207 06:30:09.594507  115804 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1207 06:30:09.634106  115804 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1207 06:30:09.766554  115804 controller.go:170] Shutting down kubernetes service endpoint reconciler
				from junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-062908.xml
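
The repeated clientconn.go / balancer_v1_wrapper.go lines above are routine gRPC dial logging from the apiserver's etcd storage clients: each client dials 127.0.0.1:2379 with an empty target scheme, falls back to the default resolver, and the v1 balancer wrapper reports the address update. A minimal sketch of the dial that produces these messages, assuming the grpc-go v1.x API of this era:

package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// The target has no scheme, so grpc-go logs `scheme "" not registered,
	// fallback to default scheme` and hands the address to the balancer.
	conn, err := grpc.Dial("127.0.0.1:2379", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}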

578 passed tests and 4 skipped tests omitted.

Error lines from build-log.txt

... skipping 10 lines ...
I1207 06:16:02.242] process 203 exited with code 0 after 0.0m
I1207 06:16:02.243] Call:  gcloud config get-value account
I1207 06:16:02.507] process 216 exited with code 0 after 0.0m
I1207 06:16:02.507] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 06:16:02.507] Call:  kubectl get -oyaml pods/83a39245-f9e7-11e8-aa74-0a580a6c01e0
W1207 06:16:02.840] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E1207 06:16:02.844] Command failed
I1207 06:16:02.844] process 229 exited with code 1 after 0.0m
E1207 06:16:02.844] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/83a39245-f9e7-11e8-aa74-0a580a6c01e0']' returned non-zero exit status 1
I1207 06:16:02.844] Root: /workspace
I1207 06:16:02.844] cd to /workspace
I1207 06:16:02.845] Checkout: /workspace/k8s.io/kubernetes master:cd5f41ec1ad45c831df38d986a682e5580eaead7,71684:3c055aa4b47232bf7d6b5d5a0901dae239e33c59 to /workspace/k8s.io/kubernetes
I1207 06:16:02.845] Call:  git init k8s.io/kubernetes
... skipping 882 lines ...
W1207 06:24:14.187] I1207 06:24:14.185685   55703 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
W1207 06:24:14.187] I1207 06:24:14.185766   55703 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
W1207 06:24:14.187] I1207 06:24:14.185817   55703 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
W1207 06:24:14.188] I1207 06:24:14.185857   55703 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
W1207 06:24:14.188] I1207 06:24:14.185960   55703 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
W1207 06:24:14.188] I1207 06:24:14.186012   55703 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
W1207 06:24:14.188] E1207 06:24:14.186097   55703 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 06:24:14.189] I1207 06:24:14.186124   55703 controllermanager.go:516] Started "resourcequota"
W1207 06:24:14.189] I1207 06:24:14.186131   55703 core.go:151] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W1207 06:24:14.189] W1207 06:24:14.186148   55703 controllermanager.go:508] Skipping "route"
W1207 06:24:14.189] W1207 06:24:14.186156   55703 controllermanager.go:508] Skipping "ttl-after-finished"
W1207 06:24:14.189] I1207 06:24:14.186868   55703 resource_quota_controller.go:276] Starting resource quota controller
W1207 06:24:14.190] I1207 06:24:14.187351   55703 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W1207 06:24:14.190] I1207 06:24:14.187520   55703 resource_quota_monitor.go:301] QuotaMonitor running
W1207 06:24:14.190] I1207 06:24:14.187726   55703 controllermanager.go:516] Started "daemonset"
W1207 06:24:14.190] I1207 06:24:14.188868   55703 controllermanager.go:516] Started "csrapproving"
W1207 06:24:14.190] W1207 06:24:14.189356   55703 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
W1207 06:24:14.191] I1207 06:24:14.189762   55703 certificate_controller.go:113] Starting certificate controller
W1207 06:24:14.191] I1207 06:24:14.189881   55703 controller_utils.go:1027] Waiting for caches to sync for certificate controller
W1207 06:24:14.191] I1207 06:24:14.189760   55703 daemon_controller.go:268] Starting daemon sets controller
W1207 06:24:14.191] I1207 06:24:14.190041   55703 controller_utils.go:1027] Waiting for caches to sync for daemon sets controller
W1207 06:24:14.191] I1207 06:24:14.190355   55703 garbagecollector.go:133] Starting garbage collector controller
W1207 06:24:14.191] I1207 06:24:14.190393   55703 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 06:24:14.192] I1207 06:24:14.190426   55703 controllermanager.go:516] Started "garbagecollector"
W1207 06:24:14.192] I1207 06:24:14.190528   55703 graph_builder.go:308] GraphBuilder running
W1207 06:24:14.192] I1207 06:24:14.190830   55703 controllermanager.go:516] Started "csrcleaner"
W1207 06:24:14.192] I1207 06:24:14.190865   55703 cleaner.go:81] Starting CSR cleaner controller
W1207 06:24:14.194] E1207 06:24:14.193799   55703 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1207 06:24:14.194] W1207 06:24:14.193829   55703 controllermanager.go:508] Skipping "service"
W1207 06:24:14.194] W1207 06:24:14.193842   55703 controllermanager.go:508] Skipping "root-ca-cert-publisher"
W1207 06:24:14.195] I1207 06:24:14.194756   55703 controllermanager.go:516] Started "replicationcontroller"
W1207 06:24:14.196] I1207 06:24:14.194997   55703 replica_set.go:182] Starting replicationcontroller controller
W1207 06:24:14.196] I1207 06:24:14.195977   55703 controller_utils.go:1027] Waiting for caches to sync for ReplicationController controller
W1207 06:24:14.198] I1207 06:24:14.197834   55703 controllermanager.go:516] Started "serviceaccount"
... skipping 16 lines ...
W1207 06:24:14.278] I1207 06:24:14.278464   55703 controller_utils.go:1034] Caches are synced for GC controller
W1207 06:24:14.290] I1207 06:24:14.290248   55703 controller_utils.go:1034] Caches are synced for daemon sets controller
W1207 06:24:14.291] I1207 06:24:14.290249   55703 controller_utils.go:1034] Caches are synced for certificate controller
W1207 06:24:14.296] I1207 06:24:14.296312   55703 controller_utils.go:1034] Caches are synced for ReplicationController controller
W1207 06:24:14.299] I1207 06:24:14.298981   55703 controller_utils.go:1034] Caches are synced for service account controller
W1207 06:24:14.301] I1207 06:24:14.301144   52326 controller.go:608] quota admission added evaluator for: serviceaccounts
W1207 06:24:14.351] W1207 06:24:14.351202   55703 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1207 06:24:14.370] I1207 06:24:14.369686   55703 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
W1207 06:24:14.376] E1207 06:24:14.376437   55703 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I1207 06:24:14.477] +++ [1207 06:24:14] On try 3, controller-manager: ok
I1207 06:24:14.477] node/127.0.0.1 created
I1207 06:24:14.477] +++ [1207 06:24:14] Checking kubectl version
I1207 06:24:14.477] Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.901+5d76949082d149", GitCommit:"5d76949082d14918dea6d2bae668bb58512a4408", GitTreeState:"clean", BuildDate:"2018-12-07T06:22:28Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
I1207 06:24:14.478] Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.901+5d76949082d149", GitCommit:"5d76949082d14918dea6d2bae668bb58512a4408", GitTreeState:"clean", BuildDate:"2018-12-07T06:22:45Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
W1207 06:24:14.578] I1207 06:24:14.477716   55703 controller_utils.go:1034] Caches are synced for disruption controller
... skipping 31 lines ...
I1207 06:24:15.458] Successful: --client --output json has no server info
I1207 06:24:15.461] +++ [1207 06:24:15] Testing kubectl version: compare json output using additional --short flag
I1207 06:24:15.597] Successful: --short --output client json info is equal to non short result
I1207 06:24:15.604] Successful: --short --output server json info is equal to non short result
I1207 06:24:15.607] +++ [1207 06:24:15] Testing kubectl version: compare json output with yaml output
W1207 06:24:15.708] I1207 06:24:15.654470   55703 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 06:24:15.708] E1207 06:24:15.680908   55703 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 06:24:15.708] I1207 06:24:15.690565   55703 controller_utils.go:1034] Caches are synced for garbage collector controller
W1207 06:24:15.709] I1207 06:24:15.690623   55703 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1207 06:24:15.755] I1207 06:24:15.754847   55703 controller_utils.go:1034] Caches are synced for garbage collector controller
I1207 06:24:15.856] Successful: --output json/yaml has identical information
I1207 06:24:15.856] +++ exit code: 0
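The version checks above exercise kubectl's output modes and assert that they all report the same build information; a minimal sketch of the commands involved (flags as they existed in this kubectl vintage):

  $ kubectl version --short                  # one-line client and server versions
  $ kubectl version --client --output=json   # client info only, no server call
  $ kubectl version --output=yaml            # same fields as the JSON form, so the two can be diffed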
I1207 06:24:15.856] Recording: run_kubectl_config_set_tests
... skipping 42 lines ...
I1207 06:24:18.286] +++ working dir: /go/src/k8s.io/kubernetes
I1207 06:24:18.288] +++ command: run_RESTMapper_evaluation_tests
I1207 06:24:18.301] +++ [1207 06:24:18] Creating namespace namespace-1544163858-13475
I1207 06:24:18.369] namespace/namespace-1544163858-13475 created
I1207 06:24:18.437] Context "test" modified.
I1207 06:24:18.443] +++ [1207 06:24:18] Testing RESTMapper
I1207 06:24:18.551] +++ [1207 06:24:18] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1207 06:24:18.566] +++ exit code: 0
I1207 06:24:18.674] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1207 06:24:18.675] bindings                                                                      true         Binding
I1207 06:24:18.675] componentstatuses                 cs                                          false        ComponentStatus
I1207 06:24:18.675] configmaps                        cm                                          true         ConfigMap
I1207 06:24:18.675] endpoints                         ep                                          true         Endpoints
... skipping 609 lines ...
I1207 06:24:37.661] poddisruptionbudget.policy/test-pdb-3 created
I1207 06:24:37.753] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1207 06:24:37.822] poddisruptionbudget.policy/test-pdb-4 created
I1207 06:24:37.913] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1207 06:24:38.069] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:24:38.239] pod/env-test-pod created
W1207 06:24:38.340] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1207 06:24:38.340] error: setting 'all' parameter but found a non-empty selector.
W1207 06:24:38.340] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 06:24:38.340] I1207 06:24:37.338484   52326 controller.go:608] quota admission added evaluator for: poddisruptionbudgets.policy
W1207 06:24:38.341] error: min-available and max-unavailable cannot be both specified
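The last error above is kubectl refusing a disruption budget that specifies both budget styles at once; a minimal sketch (name and selector are illustrative):

  $ kubectl create poddisruptionbudget my-pdb --selector=app=nginx \
      --min-available=1 --max-unavailable=1
  error: min-available and max-unavailable cannot be both specified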
I1207 06:24:38.441] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I1207 06:24:38.441] Name:               env-test-pod
I1207 06:24:38.441] Namespace:          test-kubectl-describe-pod
I1207 06:24:38.441] Priority:           0
I1207 06:24:38.442] PriorityClassName:  <none>
I1207 06:24:38.442] Node:               <none>
... skipping 145 lines ...
W1207 06:24:49.999] I1207 06:24:49.134175   55703 namespace_controller.go:171] Namespace has been deleted test-kubectl-describe-pod
W1207 06:24:50.000] I1207 06:24:49.567246   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544163885-23300", Name:"modified", UID:"cca0dfc8-f9e8-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-wnplb
I1207 06:24:50.149] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:24:50.318] pod/valid-pod created
I1207 06:24:50.415] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 06:24:50.562] Successful
I1207 06:24:50.562] message:Error from server: cannot restore map from string
I1207 06:24:50.562] has:cannot restore map from string
I1207 06:24:50.645] Successful
I1207 06:24:50.645] message:pod/valid-pod patched (no change)
I1207 06:24:50.645] has:patched (no change)
I1207 06:24:50.726] pod/valid-pod patched
I1207 06:24:50.816] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I1207 06:24:51.319] pod/valid-pod patched
I1207 06:24:51.409] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1207 06:24:51.481] pod/valid-pod patched
I1207 06:24:51.574] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1207 06:24:51.732] pod/valid-pod patched
I1207 06:24:51.830] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 06:24:51.996] +++ [1207 06:24:51] "kubectl patch with resourceVersion 487" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W1207 06:24:52.096] E1207 06:24:50.553892   52326 status.go:64] apiserver received an error that is not a metav1.Status: &errors.errorString{s:"cannot restore map from string"}
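The conflict above is the apiserver's optimistic-concurrency check: a patch that embeds a stale metadata.resourceVersion is rejected until the client re-reads the object. A minimal sketch (the patched field is illustrative):

  $ kubectl patch pod valid-pod -p '{"metadata":{"resourceVersion":"487"},"spec":{"activeDeadlineSeconds":30}}'
  Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again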
I1207 06:24:52.234] pod "valid-pod" deleted
I1207 06:24:52.245] pod/valid-pod replaced
I1207 06:24:52.343] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1207 06:24:52.498] Successful
I1207 06:24:52.498] message:error: --grace-period must have --force specified
I1207 06:24:52.498] has:\-\-grace-period must have \-\-force specified
I1207 06:24:52.666] Successful
I1207 06:24:52.666] message:error: --timeout must have --force specified
I1207 06:24:52.666] has:\-\-timeout must have \-\-force specified
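Both errors come from kubectl replace, where --grace-period and --timeout only make sense as part of a forced delete-and-recreate; a minimal sketch (file name is illustrative):

  $ kubectl replace --grace-period=15 -f pod.yaml           # rejected: no --force
  error: --grace-period must have --force specified
  $ kubectl replace --force --grace-period=15 -f pod.yaml   # deletes the object, then recreates it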
W1207 06:24:52.815] W1207 06:24:52.814628   55703 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1207 06:24:52.917] node/node-v1-test created
I1207 06:24:52.971] node/node-v1-test replaced
I1207 06:24:53.068] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1207 06:24:53.143] node "node-v1-test" deleted
I1207 06:24:53.242] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 06:24:53.515] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 58 lines ...
I1207 06:24:58.463] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:24:58.616] pod/test-pod created
W1207 06:24:58.717] Edit cancelled, no changes made.
W1207 06:24:58.717] Edit cancelled, no changes made.
W1207 06:24:58.717] Edit cancelled, no changes made.
W1207 06:24:58.717] Edit cancelled, no changes made.
W1207 06:24:58.717] error: 'name' already has a value (valid-pod), and --overwrite is false
W1207 06:24:58.718] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 06:24:58.718] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 06:24:58.818] pod "test-pod" deleted
I1207 06:24:58.818] +++ [1207 06:24:58] Creating namespace namespace-1544163898-30695
I1207 06:24:58.865] namespace/namespace-1544163898-30695 created
I1207 06:24:58.932] Context "test" modified.
... skipping 41 lines ...
I1207 06:25:01.978] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1207 06:25:01.980] +++ working dir: /go/src/k8s.io/kubernetes
I1207 06:25:01.982] +++ command: run_kubectl_create_error_tests
I1207 06:25:01.995] +++ [1207 06:25:01] Creating namespace namespace-1544163901-3325
I1207 06:25:02.064] namespace/namespace-1544163901-3325 created
I1207 06:25:02.133] Context "test" modified.
I1207 06:25:02.140] +++ [1207 06:25:02] Testing kubectl create with error
W1207 06:25:02.241] Error: required flag(s) "filename" not set
W1207 06:25:02.241] 
W1207 06:25:02.241] 
W1207 06:25:02.241] Examples:
W1207 06:25:02.241]   # Create a pod using the data in pod.json.
W1207 06:25:02.241]   kubectl create -f ./pod.json
W1207 06:25:02.241]   
... skipping 38 lines ...
W1207 06:25:02.245]   kubectl create -f FILENAME [options]
W1207 06:25:02.245] 
W1207 06:25:02.246] Use "kubectl <command> --help" for more information about a given command.
W1207 06:25:02.246] Use "kubectl options" for a list of global command-line options (applies to all commands).
W1207 06:25:02.246] 
W1207 06:25:02.246] required flag(s) "filename" not set
I1207 06:25:02.362] +++ [1207 06:25:02] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
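As the message itself suggests, the same manifest is accepted once client-side schema validation is turned off; a sketch:

  $ kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml --validate=false
  # skips the client-side schema check; the apiserver can still reject truly malformed objects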
W1207 06:25:02.462] kubectl convert is DEPRECATED and will be removed in a future version.
W1207 06:25:02.463] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 06:25:02.563] +++ exit code: 0
I1207 06:25:02.563] Recording: run_kubectl_apply_tests
I1207 06:25:02.563] Running command: run_kubectl_apply_tests
I1207 06:25:02.582] 
... skipping 17 lines ...
I1207 06:25:03.711] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I1207 06:25:04.536] deployment.extensions "test-deployment-retainkeys" deleted
I1207 06:25:04.632] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:04.788] pod/selector-test-pod created
I1207 06:25:04.885] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1207 06:25:04.969] Successful
I1207 06:25:04.969] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1207 06:25:04.970] has:pods "selector-test-pod-dont-apply" not found
I1207 06:25:05.046] pod "selector-test-pod" deleted
I1207 06:25:05.141] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:05.370] pod/test-pod created (server dry run)
I1207 06:25:05.467] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
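The "(server dry run)" suffix above means the object went through full apiserver admission but was never persisted, which is why the get that follows it still returns nothing. A sketch with the flag spelled as it was in this release (it later became --dry-run=server):

  $ kubectl apply -f pod.yaml --server-dry-run   # full validation and admission, no storage write
  pod/test-pod created (server dry run)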
I1207 06:25:05.614] pod/test-pod created
... skipping 8 lines ...
W1207 06:25:06.336] I1207 06:25:06.335717   52326 clientconn.go:551] parsed scheme: ""
W1207 06:25:06.336] I1207 06:25:06.335749   52326 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 06:25:06.337] I1207 06:25:06.335783   52326 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 06:25:06.337] I1207 06:25:06.335869   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:25:06.337] I1207 06:25:06.336287   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:25:06.414] I1207 06:25:06.413813   52326 controller.go:608] quota admission added evaluator for: resources.mygroup.example.com
W1207 06:25:06.502] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1207 06:25:06.603] kind.mygroup.example.com/myobj created (server dry run)
I1207 06:25:06.603] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1207 06:25:06.682] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:06.830] pod/a created
I1207 06:25:08.331] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I1207 06:25:08.416] Successful
I1207 06:25:08.416] message:Error from server (NotFound): pods "b" not found
I1207 06:25:08.416] has:pods "b" not found
I1207 06:25:08.567] pod/b created
I1207 06:25:08.580] pod/a pruned
I1207 06:25:10.268] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I1207 06:25:10.352] Successful
I1207 06:25:10.352] message:Error from server (NotFound): pods "a" not found
I1207 06:25:10.352] has:pods "a" not found
I1207 06:25:10.426] pod "b" deleted
I1207 06:25:10.517] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:10.669] pod/a created
I1207 06:25:10.761] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I1207 06:25:10.843] Successful
I1207 06:25:10.844] message:Error from server (NotFound): pods "b" not found
I1207 06:25:10.844] has:pods "b" not found
I1207 06:25:10.997] pod/b created
I1207 06:25:11.091] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I1207 06:25:11.173] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I1207 06:25:11.245] pod "a" deleted
I1207 06:25:11.249] pod "b" deleted
I1207 06:25:11.411] Successful
I1207 06:25:11.411] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I1207 06:25:11.411] has:all resources selected for prune without explicitly passing --all
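Pruning refuses to run without an explicit scope, exactly as the error states; a minimal sketch (directory and label are illustrative):

  $ kubectl apply --prune -f manifests/                 # rejected: neither -l nor --all given
  $ kubectl apply --prune -f manifests/ -l app=mine     # prune only objects matching the selector
  $ kubectl apply --prune -f manifests/ --all           # prune everything previously applied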
I1207 06:25:11.559] pod/a created
I1207 06:25:11.566] pod/b created
I1207 06:25:11.575] service/prune-svc created
I1207 06:25:13.080] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I1207 06:25:13.170] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 127 lines ...
I1207 06:25:25.555] Context "test" modified.
I1207 06:25:25.561] +++ [1207 06:25:25] Testing kubectl create filter
I1207 06:25:25.647] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:25.797] pod/selector-test-pod created
I1207 06:25:25.893] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1207 06:25:25.974] Successful
I1207 06:25:25.974] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1207 06:25:25.974] has:pods "selector-test-pod-dont-apply" not found
I1207 06:25:26.051] pod "selector-test-pod" deleted
I1207 06:25:26.071] +++ exit code: 0
I1207 06:25:26.116] Recording: run_kubectl_apply_deployments_tests
I1207 06:25:26.117] Running command: run_kubectl_apply_deployments_tests
I1207 06:25:26.137] 
... skipping 28 lines ...
I1207 06:25:28.054] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:28.141] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:28.231] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:28.386] deployment.extensions/nginx created
I1207 06:25:28.486] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I1207 06:25:32.687] Successful
I1207 06:25:32.687] message:Error from server (Conflict): error when applying patch:
I1207 06:25:32.688] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544163926-9468\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1207 06:25:32.688] to:
I1207 06:25:32.688] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I1207 06:25:32.688] Name: "nginx", Namespace: "namespace-1544163926-9468"
I1207 06:25:32.690] Object: &{map["metadata":map["namespace":"namespace-1544163926-9468" "uid":"e3c48719-f9e8-11e8-96a9-0242ac110002" "resourceVersion":"707" "generation":'\x01' "creationTimestamp":"2018-12-07T06:25:28Z" "name":"nginx" "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544163926-9468\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1544163926-9468/deployments/nginx"] "spec":map["replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647)] "status":map["conditions":[map["lastTransitionTime":"2018-12-07T06:25:28Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available" "status":"False" "lastUpdateTime":"2018-12-07T06:25:28Z"]] "observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03'] "kind":"Deployment" "apiVersion":"extensions/v1beta1"]}
I1207 06:25:32.690] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I1207 06:25:32.690] has:Error from server (Conflict)
W1207 06:25:32.790] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1207 06:25:32.791] I1207 06:25:23.607002   52326 controller.go:608] quota admission added evaluator for: jobs.batch
W1207 06:25:32.791] I1207 06:25:23.619704   55703 event.go:221] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1544163923-2678", Name:"pi", UID:"e0eba4ee-f9e8-11e8-96a9-0242ac110002", APIVersion:"batch/v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-rs5l5
W1207 06:25:32.791] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1207 06:25:32.791] I1207 06:25:24.164391   52326 controller.go:608] quota admission added evaluator for: deployments.apps
W1207 06:25:32.791] I1207 06:25:24.170560   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544163923-2678", Name:"nginx-extensions", UID:"e140aedc-f9e8-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-extensions-6fb4b564f5 to 1
... skipping 5 lines ...
W1207 06:25:32.793] I1207 06:25:24.908244   52326 controller.go:608] quota admission added evaluator for: cronjobs.batch
W1207 06:25:32.793] I1207 06:25:26.708953   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544163926-9468", Name:"my-depl", UID:"e2c42329-f9e8-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-559b7bc95d to 1
W1207 06:25:32.793] I1207 06:25:26.713042   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544163926-9468", Name:"my-depl-559b7bc95d", UID:"e2c4a590-f9e8-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-559b7bc95d-lsmbr
W1207 06:25:32.793] I1207 06:25:27.212432   52326 controller.go:608] quota admission added evaluator for: replicasets.extensions
W1207 06:25:32.793] I1207 06:25:27.217042   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544163926-9468", Name:"my-depl", UID:"e2c42329-f9e8-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-6676598dcb to 1
W1207 06:25:32.794] I1207 06:25:27.218916   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544163926-9468", Name:"my-depl-6676598dcb", UID:"e3124150-f9e8-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-6676598dcb-df4gl
W1207 06:25:32.794] E1207 06:25:27.866803   55703 replica_set.go:450] Sync "namespace-1544163926-9468/my-depl-559b7bc95d" failed with replicasets.apps "my-depl-559b7bc95d" not found
W1207 06:25:32.794] I1207 06:25:28.389877   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544163926-9468", Name:"nginx", UID:"e3c48719-f9e8-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W1207 06:25:32.794] I1207 06:25:28.393329   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544163926-9468", Name:"nginx-5d56d6b95f", UID:"e3c51825-f9e8-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-fq8n9
W1207 06:25:32.794] I1207 06:25:28.396475   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544163926-9468", Name:"nginx-5d56d6b95f", UID:"e3c51825-f9e8-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-6lpf9
W1207 06:25:32.795] I1207 06:25:28.396548   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544163926-9468", Name:"nginx-5d56d6b95f", UID:"e3c51825-f9e8-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-pdg57
W1207 06:25:36.910] E1207 06:25:36.910178   55703 replica_set.go:450] Sync "namespace-1544163926-9468/nginx-5d56d6b95f" failed with replicasets.apps "nginx-5d56d6b95f" not found
I1207 06:25:37.893] deployment.extensions/nginx configured
I1207 06:25:37.987] Successful
I1207 06:25:37.987] message:        "name": "nginx2"
I1207 06:25:37.987]           "name": "nginx2"
I1207 06:25:37.988] has:"name": "nginx2"
W1207 06:25:38.088] I1207 06:25:37.896965   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544163926-9468", Name:"nginx", UID:"e96f26c4-f9e8-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
... skipping 82 lines ...
I1207 06:25:44.468] +++ [1207 06:25:44] Creating namespace namespace-1544163944-6973
I1207 06:25:44.537] namespace/namespace-1544163944-6973 created
I1207 06:25:44.606] Context "test" modified.
I1207 06:25:44.614] +++ [1207 06:25:44] Testing kubectl get
I1207 06:25:44.704] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:44.786] Successful
I1207 06:25:44.786] message:Error from server (NotFound): pods "abc" not found
I1207 06:25:44.786] has:pods "abc" not found
I1207 06:25:44.872] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:44.955] Successful
I1207 06:25:44.955] message:Error from server (NotFound): pods "abc" not found
I1207 06:25:44.955] has:pods "abc" not found
I1207 06:25:45.047] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:45.128] Successful
I1207 06:25:45.129] message:{
I1207 06:25:45.129]     "apiVersion": "v1",
I1207 06:25:45.129]     "items": [],
... skipping 23 lines ...
I1207 06:25:45.444] has not:No resources found
I1207 06:25:45.520] Successful
I1207 06:25:45.521] message:NAME
I1207 06:25:45.521] has not:No resources found
I1207 06:25:45.603] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:45.728] Successful
I1207 06:25:45.729] message:error: the server doesn't have a resource type "foobar"
I1207 06:25:45.729] has not:No resources found
I1207 06:25:45.808] Successful
I1207 06:25:45.809] message:No resources found.
I1207 06:25:45.809] has:No resources found
I1207 06:25:45.887] Successful
I1207 06:25:45.888] message:
I1207 06:25:45.888] has not:No resources found
I1207 06:25:45.967] Successful
I1207 06:25:45.968] message:No resources found.
I1207 06:25:45.968] has:No resources found
I1207 06:25:46.051] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:46.132] Successful
I1207 06:25:46.132] message:Error from server (NotFound): pods "abc" not found
I1207 06:25:46.132] has:pods "abc" not found
I1207 06:25:46.134] FAIL!
I1207 06:25:46.134] message:Error from server (NotFound): pods "abc" not found
I1207 06:25:46.134] has not:List
I1207 06:25:46.134] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1207 06:25:46.240] Successful
I1207 06:25:46.241] message:I1207 06:25:46.192017   67953 loader.go:359] Config loaded from file /tmp/tmp.oiTfaE2xn0/.kube/config
I1207 06:25:46.241] I1207 06:25:46.192528   67953 loader.go:359] Config loaded from file /tmp/tmp.oiTfaE2xn0/.kube/config
I1207 06:25:46.241] I1207 06:25:46.193831   67953 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 995 lines ...
I1207 06:25:49.686] }
I1207 06:25:49.772] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 06:25:50.013] <no value>Successful
I1207 06:25:50.013] message:valid-pod:
I1207 06:25:50.013] has:valid-pod:
I1207 06:25:50.095] Successful
I1207 06:25:50.095] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1207 06:25:50.095] 	template was:
I1207 06:25:50.095] 		{.missing}
I1207 06:25:50.095] 	object given to jsonpath engine was:
I1207 06:25:50.096] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/namespace-1544163949-20464/pods/valid-pod", "uid":"f0691ad1-f9e8-11e8-96a9-0242ac110002", "resourceVersion":"800", "creationTimestamp":"2018-12-07T06:25:49Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1544163949-20464"}, "spec":map[string]interface {}{"enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"memory":"512Mi", "cpu":"1"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0}, "status":map[string]interface {}{"qosClass":"Guaranteed", "phase":"Pending"}}
I1207 06:25:50.096] has:missing is not found
I1207 06:25:50.179] Successful
I1207 06:25:50.179] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1207 06:25:50.179] 	template was:
I1207 06:25:50.179] 		{{.missing}}
I1207 06:25:50.179] 	raw data was:
I1207 06:25:50.180] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2018-12-07T06:25:49Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1544163949-20464","resourceVersion":"800","selfLink":"/api/v1/namespaces/namespace-1544163949-20464/pods/valid-pod","uid":"f0691ad1-f9e8-11e8-96a9-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1207 06:25:50.180] 	object given to template engine was:
I1207 06:25:50.180] 		map[kind:Pod metadata:map[namespace:namespace-1544163949-20464 resourceVersion:800 selfLink:/api/v1/namespaces/namespace-1544163949-20464/pods/valid-pod uid:f0691ad1-f9e8-11e8-96a9-0242ac110002 creationTimestamp:2018-12-07T06:25:49Z labels:map[name:valid-pod] name:valid-pod] spec:map[priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[requests:map[cpu:1 memory:512Mi] limits:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true] status:map[qosClass:Guaranteed phase:Pending] apiVersion:v1]
I1207 06:25:50.181] has:map has no entry for key "missing"
W1207 06:25:50.281] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
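The two failures above are the two template engines reacting to a key that does not exist on the object; a minimal sketch:

  $ kubectl get pod valid-pod -o jsonpath='{.missing}'       # jsonpath: "missing is not found"
  $ kubectl get pod valid-pod -o go-template='{{.missing}}'  # go-template: map has no entry for key "missing"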
W1207 06:25:51.255] E1207 06:25:51.254592   68338 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I1207 06:25:51.355] Successful
I1207 06:25:51.356] message:NAME        READY   STATUS    RESTARTS   AGE
I1207 06:25:51.356] valid-pod   0/1     Pending   0          1s
I1207 06:25:51.356] has:STATUS
I1207 06:25:51.356] Successful
... skipping 80 lines ...
I1207 06:25:53.528]   terminationGracePeriodSeconds: 30
I1207 06:25:53.528] status:
I1207 06:25:53.528]   phase: Pending
I1207 06:25:53.528]   qosClass: Guaranteed
I1207 06:25:53.528] has:name: valid-pod
I1207 06:25:53.528] Successful
I1207 06:25:53.528] message:Error from server (NotFound): pods "invalid-pod" not found
I1207 06:25:53.528] has:"invalid-pod" not found
I1207 06:25:53.594] pod "valid-pod" deleted
I1207 06:25:53.694] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:25:53.857] pod/redis-master created
I1207 06:25:53.861] pod/valid-pod created
I1207 06:25:53.959] Successful
... skipping 305 lines ...
I1207 06:25:58.070] Running command: run_create_secret_tests
I1207 06:25:58.091] 
I1207 06:25:58.093] +++ Running case: test-cmd.run_create_secret_tests 
I1207 06:25:58.095] +++ working dir: /go/src/k8s.io/kubernetes
I1207 06:25:58.098] +++ command: run_create_secret_tests
I1207 06:25:58.188] Successful
I1207 06:25:58.188] message:Error from server (NotFound): secrets "mysecret" not found
I1207 06:25:58.189] has:secrets "mysecret" not found
W1207 06:25:58.289] I1207 06:25:57.249281   52326 clientconn.go:551] parsed scheme: ""
W1207 06:25:58.290] I1207 06:25:57.249320   52326 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 06:25:58.290] I1207 06:25:57.249388   52326 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 06:25:58.290] I1207 06:25:57.249506   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:25:58.290] I1207 06:25:57.250038   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:25:58.290] No resources found.
W1207 06:25:58.290] No resources found.
I1207 06:25:58.391] Successful
I1207 06:25:58.391] message:Error from server (NotFound): secrets "mysecret" not found
I1207 06:25:58.391] has:secrets "mysecret" not found
I1207 06:25:58.392] Successful
I1207 06:25:58.392] message:user-specified
I1207 06:25:58.392] has:user-specified
I1207 06:25:58.416] Successful
I1207 06:25:58.489] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"f5b5a2c3-f9e8-11e8-96a9-0242ac110002","resourceVersion":"875","creationTimestamp":"2018-12-07T06:25:58Z"}}
... skipping 80 lines ...
I1207 06:26:00.390] has:Timeout exceeded while reading body
I1207 06:26:00.469] Successful
I1207 06:26:00.469] message:NAME        READY   STATUS    RESTARTS   AGE
I1207 06:26:00.470] valid-pod   0/1     Pending   0          1s
I1207 06:26:00.470] has:valid-pod
I1207 06:26:00.537] Successful
I1207 06:26:00.538] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1207 06:26:00.538] has:Invalid timeout value
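That rejection is the client-side parse of --request-timeout, which accepts bare seconds or an integer plus a time unit; a sketch (the bad value is illustrative):

  $ kubectl get pod valid-pod --request-timeout=1      # one second, accepted
  $ kubectl get pod valid-pod --request-timeout=2m     # accepted
  $ kubectl get pod valid-pod --request-timeout=5x     # rejected: unknown time unit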
I1207 06:26:00.615] pod "valid-pod" deleted
I1207 06:26:00.636] +++ exit code: 0
I1207 06:26:00.670] Recording: run_crd_tests
I1207 06:26:00.671] Running command: run_crd_tests
I1207 06:26:00.691] 
... skipping 166 lines ...
I1207 06:26:04.956] foo.company.com/test patched
I1207 06:26:05.048] crd.sh:237: Successful get foos/test {{.patched}}: value1
I1207 06:26:05.131] foo.company.com/test patched
I1207 06:26:05.222] crd.sh:239: Successful get foos/test {{.patched}}: value2
I1207 06:26:05.303] foo.company.com/test patched
I1207 06:26:05.396] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I1207 06:26:05.548] +++ [1207 06:26:05] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I1207 06:26:05.613] {
I1207 06:26:05.613]     "apiVersion": "company.com/v1",
I1207 06:26:05.613]     "kind": "Foo",
I1207 06:26:05.613]     "metadata": {
I1207 06:26:05.613]         "annotations": {
I1207 06:26:05.613]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
W1207 06:26:07.180] I1207 06:26:03.296095   52326 controller.go:608] quota admission added evaluator for: foos.company.com
W1207 06:26:07.181] I1207 06:26:06.818954   52326 controller.go:608] quota admission added evaluator for: bars.company.com
W1207 06:26:07.181] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 70824 Killed                  while [ ${tries} -lt 10 ]; do
W1207 06:26:07.181]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W1207 06:26:07.181] done
W1207 06:26:07.182] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 70823 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W1207 06:26:15.807] E1207 06:26:15.806386   55703 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos"]
W1207 06:26:15.977] I1207 06:26:15.977246   55703 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 06:26:15.978] I1207 06:26:15.978470   52326 clientconn.go:551] parsed scheme: ""
W1207 06:26:15.979] I1207 06:26:15.978496   52326 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 06:26:15.979] I1207 06:26:15.978531   52326 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 06:26:15.980] I1207 06:26:15.978575   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:26:15.980] I1207 06:26:15.979124   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 81 lines ...
I1207 06:26:27.997] +++ [1207 06:26:27] Testing cmd with image
I1207 06:26:28.089] Successful
I1207 06:26:28.089] message:deployment.apps/test1 created
I1207 06:26:28.090] has:deployment.apps/test1 created
I1207 06:26:28.165] deployment.extensions "test1" deleted
I1207 06:26:28.241] Successful
I1207 06:26:28.241] message:error: Invalid image name "InvalidImageName": invalid reference format
I1207 06:26:28.241] has:error: Invalid image name "InvalidImageName": invalid reference format
I1207 06:26:28.253] +++ exit code: 0
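The image-name check above happens entirely client-side, before anything reaches the cluster; a minimal sketch using kubectl run, the (deprecated) generator flagged in the surrounding log:

  $ kubectl run test2 --image=InvalidImageName
  error: Invalid image name "InvalidImageName": invalid reference format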
I1207 06:26:28.286] Recording: run_recursive_resources_tests
I1207 06:26:28.286] Running command: run_recursive_resources_tests
I1207 06:26:28.307] 
I1207 06:26:28.308] +++ Running case: test-cmd.run_recursive_resources_tests 
I1207 06:26:28.310] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I1207 06:26:28.466] Context "test" modified.
I1207 06:26:28.560] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:26:28.825] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:28.827] Successful
I1207 06:26:28.828] message:pod/busybox0 created
I1207 06:26:28.828] pod/busybox1 created
I1207 06:26:28.828] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 06:26:28.828] has:error validating data: kind not set
I1207 06:26:28.928] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:29.100] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1207 06:26:29.103] Successful
I1207 06:26:29.103] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 06:26:29.103] has:Object 'Kind' is missing
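These recursive tests point kubectl at a directory tree in which one manifest is deliberately broken ("ind" instead of "kind"), so each verb handles the valid files and still surfaces an error for the bad one; a sketch:

  $ kubectl create -f hack/testdata/recursive/pod --recursive
  pod/busybox0 created
  pod/busybox1 created
  error: error validating ".../busybox-broken.yaml": error validating data: kind not set; ...
  # verbs that skip validation (get, describe, label, patch) fail at decode time
  # instead, with: Object 'Kind' is missing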
I1207 06:26:29.194] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:29.450] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1207 06:26:29.452] Successful
I1207 06:26:29.452] message:pod/busybox0 replaced
I1207 06:26:29.452] pod/busybox1 replaced
I1207 06:26:29.452] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 06:26:29.452] has:error validating data: kind not set
I1207 06:26:29.544] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:29.637] Successful
I1207 06:26:29.637] message:Name:               busybox0
I1207 06:26:29.637] Namespace:          namespace-1544163988-12070
I1207 06:26:29.637] Priority:           0
I1207 06:26:29.637] PriorityClassName:  <none>
... skipping 159 lines ...
I1207 06:26:29.652] has:Object 'Kind' is missing
I1207 06:26:29.735] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:29.913] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1207 06:26:29.915] Successful
I1207 06:26:29.915] message:pod/busybox0 annotated
I1207 06:26:29.915] pod/busybox1 annotated
I1207 06:26:29.916] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 06:26:29.916] has:Object 'Kind' is missing
I1207 06:26:30.004] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:30.373] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1207 06:26:30.377] Successful
I1207 06:26:30.377] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 06:26:30.377] pod/busybox0 configured
I1207 06:26:30.378] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 06:26:30.378] pod/busybox1 configured
I1207 06:26:30.378] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 06:26:30.378] has:error validating data: kind not set
W1207 06:26:30.479] Error from server (NotFound): namespaces "non-native-resources" not found
W1207 06:26:30.480] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1207 06:26:30.480] I1207 06:26:28.076539   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544163987-31543", Name:"test1", UID:"0757ff08-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"984", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W1207 06:26:30.481] I1207 06:26:28.082847   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544163987-31543", Name:"test1-fb488bd5d", UID:"0758896c-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-swfxw
I1207 06:26:30.582] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:26:30.749] deployment.extensions/nginx created
W1207 06:26:30.850] I1207 06:26:30.755335   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544163988-12070", Name:"nginx", UID:"08f01b41-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1010", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
... skipping 51 lines ...
I1207 06:26:31.507] deployment.extensions "nginx" deleted
I1207 06:26:31.563] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:31.810] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:31.813] Successful
I1207 06:26:31.814] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1207 06:26:31.814] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 06:26:31.815] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 06:26:31.815] has:Object 'Kind' is missing
I1207 06:26:31.959] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:32.094] Successful
I1207 06:26:32.095] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 06:26:32.095] has:busybox0:busybox1:
I1207 06:26:32.097] Successful
I1207 06:26:32.098] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 06:26:32.098] has:Object 'Kind' is missing
W1207 06:26:32.199] I1207 06:26:32.164155   55703 namespace_controller.go:171] Namespace has been deleted non-native-resources
I1207 06:26:32.300] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:32.387] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 06:26:32.521] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1207 06:26:32.525] Successful
I1207 06:26:32.526] message:pod/busybox0 labeled
I1207 06:26:32.526] pod/busybox1 labeled
I1207 06:26:32.526] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 06:26:32.527] has:Object 'Kind' is missing
I1207 06:26:32.669] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:32.795] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 06:26:32.935] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1207 06:26:32.938] Successful
I1207 06:26:32.938] message:pod/busybox0 patched
I1207 06:26:32.938] pod/busybox1 patched
I1207 06:26:32.939] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 06:26:32.939] has:Object 'Kind' is missing
I1207 06:26:33.076] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:33.354] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:26:33.358] Successful
I1207 06:26:33.358] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 06:26:33.358] pod "busybox0" force deleted
I1207 06:26:33.359] pod "busybox1" force deleted
I1207 06:26:33.359] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 06:26:33.359] has:Object 'Kind' is missing
I1207 06:26:33.500] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:26:33.735] replicationcontroller/busybox0 created
I1207 06:26:33.741] replicationcontroller/busybox1 created
W1207 06:26:33.842] I1207 06:26:33.739908   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544163988-12070", Name:"busybox0", UID:"0ab7bb4e-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1042", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-pzshp
W1207 06:26:33.843] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 06:26:33.844] I1207 06:26:33.746909   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544163988-12070", Name:"busybox1", UID:"0ab8ebb1-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1044", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-b6wkj
I1207 06:26:33.944] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:34.033] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:34.172] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 06:26:34.313] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 06:26:34.599] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1207 06:26:34.748] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1207 06:26:34.751] Successful
I1207 06:26:34.752] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1207 06:26:34.752] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1207 06:26:34.753] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:34.753] has:Object 'Kind' is missing
I1207 06:26:34.881] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1207 06:26:35.009] horizontalpodautoscaler.autoscaling "busybox1" deleted
I1207 06:26:35.164] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:35.309] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 06:26:35.442] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 06:26:35.626] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1207 06:26:35.713] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1207 06:26:35.716] Successful
I1207 06:26:35.716] message:service/busybox0 exposed
I1207 06:26:35.716] service/busybox1 exposed
I1207 06:26:35.716] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:35.717] has:Object 'Kind' is missing
I1207 06:26:35.804] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:35.893] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 06:26:35.982] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 06:26:36.179] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I1207 06:26:36.267] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I1207 06:26:36.269] Successful
I1207 06:26:36.269] message:replicationcontroller/busybox0 scaled
I1207 06:26:36.270] replicationcontroller/busybox1 scaled
I1207 06:26:36.270] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:36.270] has:Object 'Kind' is missing
I1207 06:26:36.360] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 06:26:36.547] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:26:36.549] Successful
I1207 06:26:36.549] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 06:26:36.550] replicationcontroller "busybox0" force deleted
I1207 06:26:36.550] replicationcontroller "busybox1" force deleted
I1207 06:26:36.550] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:36.550] has:Object 'Kind' is missing
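The force-deletion warning printed above is kubectl's standard response when graceful termination is skipped; the deletes in this case take roughly this shape (all flags are real kubectl flags):

kubectl delete -f hack/testdata/recursive/rc --recursive --grace-period=0 --force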
I1207 06:26:36.644] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:26:36.795] deployment.extensions/nginx1-deployment created
I1207 06:26:36.799] deployment.extensions/nginx0-deployment created
I1207 06:26:36.904] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1207 06:26:36.994] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1207 06:26:37.198] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1207 06:26:37.200] Successful
I1207 06:26:37.200] message:deployment.extensions/nginx1-deployment skipped rollback (current template already matches revision 1)
I1207 06:26:37.201] deployment.extensions/nginx0-deployment skipped rollback (current template already matches revision 1)
I1207 06:26:37.201] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 06:26:37.201] has:Object 'Kind' is missing
I1207 06:26:37.285] deployment.extensions/nginx1-deployment paused
I1207 06:26:37.289] deployment.extensions/nginx0-deployment paused
I1207 06:26:37.391] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1207 06:26:37.393] Successful
I1207 06:26:37.393] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 4 lines ...
I1207 06:26:37.588] Successful
I1207 06:26:37.589] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 06:26:37.589] has:Object 'Kind' is missing
W1207 06:26:37.689] I1207 06:26:36.075720   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544163988-12070", Name:"busybox0", UID:"0ab7bb4e-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1063", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-w275q
W1207 06:26:37.690] I1207 06:26:36.085500   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544163988-12070", Name:"busybox1", UID:"0ab8ebb1-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1067", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-x4cv7
W1207 06:26:37.690] I1207 06:26:36.798868   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544163988-12070", Name:"nginx1-deployment", UID:"0c8adcab-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1084", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W1207 06:26:37.691] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 06:26:37.691] I1207 06:26:36.802511   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544163988-12070", Name:"nginx1-deployment-75f6fc6747", UID:"0c8b71de-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1085", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-vmj9d
W1207 06:26:37.691] I1207 06:26:36.804580   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544163988-12070", Name:"nginx0-deployment", UID:"0c8b9d74-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W1207 06:26:37.691] I1207 06:26:36.806223   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544163988-12070", Name:"nginx1-deployment-75f6fc6747", UID:"0c8b71de-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1085", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-gzb8h
W1207 06:26:37.692] I1207 06:26:36.807678   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544163988-12070", Name:"nginx0-deployment-b6bb4ccbb", UID:"0c8c4d6e-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1090", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-vx5nx
W1207 06:26:37.692] I1207 06:26:36.810062   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544163988-12070", Name:"nginx0-deployment-b6bb4ccbb", UID:"0c8c4d6e-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1090", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-qrnmd
W1207 06:26:37.774] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 06:26:37.789] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 06:26:37.890] Successful
I1207 06:26:37.890] message:deployment.extensions/nginx1-deployment 
I1207 06:26:37.890] REVISION  CHANGE-CAUSE
I1207 06:26:37.890] 1         <none>
I1207 06:26:37.890] 
I1207 06:26:37.891] deployment.extensions/nginx0-deployment 
I1207 06:26:37.891] REVISION  CHANGE-CAUSE
I1207 06:26:37.891] 1         <none>
I1207 06:26:37.891] 
I1207 06:26:37.891] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 06:26:37.891] has:nginx0-deployment
I1207 06:26:37.891] Successful
I1207 06:26:37.892] message:deployment.extensions/nginx1-deployment 
I1207 06:26:37.892] REVISION  CHANGE-CAUSE
I1207 06:26:37.892] 1         <none>
I1207 06:26:37.892] 
I1207 06:26:37.892] deployment.extensions/nginx0-deployment 
I1207 06:26:37.892] REVISION  CHANGE-CAUSE
I1207 06:26:37.892] 1         <none>
I1207 06:26:37.892] 
I1207 06:26:37.892] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 06:26:37.893] has:nginx1-deployment
I1207 06:26:37.893] Successful
I1207 06:26:37.893] message:deployment.extensions/nginx1-deployment 
I1207 06:26:37.893] REVISION  CHANGE-CAUSE
I1207 06:26:37.893] 1         <none>
I1207 06:26:37.893] 
I1207 06:26:37.893] deployment.extensions/nginx0-deployment 
I1207 06:26:37.893] REVISION  CHANGE-CAUSE
I1207 06:26:37.893] 1         <none>
I1207 06:26:37.893] 
I1207 06:26:37.894] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 06:26:37.894] has:Object 'Kind' is missing
I1207 06:26:37.894] deployment.extensions "nginx1-deployment" force deleted
I1207 06:26:37.894] deployment.extensions "nginx0-deployment" force deleted
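The repeated REVISION / CHANGE-CAUSE tables above are kubectl rollout history output, one table per deployment found under the recursive fixture, followed as usual by the decode error for the broken manifest. A sketch of the invocation, assuming the same fixture path:

kubectl rollout history -f hack/testdata/recursive/deployment --recursive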
I1207 06:26:38.888] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:26:39.048] replicationcontroller/busybox0 created
I1207 06:26:39.052] replicationcontroller/busybox1 created
... skipping 7 lines ...
I1207 06:26:39.250] message:no rollbacker has been implemented for "ReplicationController"
I1207 06:26:39.250] no rollbacker has been implemented for "ReplicationController"
I1207 06:26:39.250] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:39.250] has:Object 'Kind' is missing
I1207 06:26:39.340] Successful
I1207 06:26:39.341] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:39.341] error: replicationcontrollers "busybox0" pausing is not supported
I1207 06:26:39.341] error: replicationcontrollers "busybox1" pausing is not supported
I1207 06:26:39.341] has:Object 'Kind' is missing
I1207 06:26:39.342] Successful
I1207 06:26:39.343] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:39.343] error: replicationcontrollers "busybox0" pausing is not supported
I1207 06:26:39.343] error: replicationcontrollers "busybox1" pausing is not supported
I1207 06:26:39.343] has:replicationcontrollers "busybox0" pausing is not supported
I1207 06:26:39.344] Successful
I1207 06:26:39.345] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:39.345] error: replicationcontrollers "busybox0" pausing is not supported
I1207 06:26:39.345] error: replicationcontrollers "busybox1" pausing is not supported
I1207 06:26:39.345] has:replicationcontrollers "busybox1" pausing is not supported
I1207 06:26:39.432] Successful
I1207 06:26:39.433] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:39.433] error: replicationcontrollers "busybox0" resuming is not supported
I1207 06:26:39.433] error: replicationcontrollers "busybox1" resuming is not supported
I1207 06:26:39.433] has:Object 'Kind' is missing
I1207 06:26:39.434] Successful
I1207 06:26:39.434] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:39.434] error: replicationcontrollers "busybox0" resuming is not supported
I1207 06:26:39.435] error: replicationcontrollers "busybox1" resuming is not supported
I1207 06:26:39.435] has:replicationcontrollers "busybox0" resuming is not supported
I1207 06:26:39.437] Successful
I1207 06:26:39.437] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:39.437] error: replicationcontrollers "busybox0" resuming is not supported
I1207 06:26:39.437] error: replicationcontrollers "busybox1" resuming is not supported
I1207 06:26:39.438] has:replicationcontrollers "busybox0" resuming is not supported
I1207 06:26:39.513] replicationcontroller "busybox0" force deleted
I1207 06:26:39.517] replicationcontroller "busybox1" force deleted
W1207 06:26:39.618] I1207 06:26:39.051643   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544163988-12070", Name:"busybox0", UID:"0de2baf7-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1129", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-fb9sv
W1207 06:26:39.618] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 06:26:39.618] I1207 06:26:39.054790   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544163988-12070", Name:"busybox1", UID:"0de36ab7-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1131", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jdczj
W1207 06:26:39.618] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 06:26:39.619] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 06:26:40.539] +++ exit code: 0
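The "no rollbacker has been implemented" and "pausing/resuming is not supported" errors in the case above are by design: kubectl rollout undo/pause/resume only work against resources with rollout support (Deployments at this point), so running them against ReplicationControllers must fail, and that is what the assertions check. A sketch of the failing calls against the same recursive fixture:

kubectl rollout pause -f hack/testdata/recursive/rc --recursive    # replicationcontrollers "busybox0" pausing is not supported
kubectl rollout resume -f hack/testdata/recursive/rc --recursive   # replicationcontrollers "busybox0" resuming is not supported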
I1207 06:26:40.595] Recording: run_namespace_tests
I1207 06:26:40.596] Running command: run_namespace_tests
I1207 06:26:40.617] 
I1207 06:26:40.619] +++ Running case: test-cmd.run_namespace_tests 
I1207 06:26:40.621] +++ working dir: /go/src/k8s.io/kubernetes
I1207 06:26:40.624] +++ command: run_namespace_tests
I1207 06:26:40.635] +++ [1207 06:26:40] Testing kubectl(v1:namespaces)
I1207 06:26:40.707] namespace/my-namespace created
I1207 06:26:40.800] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1207 06:26:40.875] namespace "my-namespace" deleted
W1207 06:26:45.813] E1207 06:26:45.813018   55703 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I1207 06:26:45.988] namespace/my-namespace condition met
I1207 06:26:46.074] Successful
I1207 06:26:46.075] message:Error from server (NotFound): namespaces "my-namespace" not found
I1207 06:26:46.075] has: not found
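"namespace/my-namespace condition met" above is kubectl wait output: after the delete at 06:26:40 the test blocks until namespace finalization actually removes the object (about five seconds here), then confirms a follow-up get returns NotFound. A sketch of that sequence; the --timeout value is an assumption, not taken from the test:

kubectl create namespace my-namespace
kubectl delete namespace my-namespace
kubectl wait --for=delete namespace/my-namespace --timeout=60s   # prints: namespace/my-namespace condition met
kubectl get namespace my-namespace                               # Error from server (NotFound)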
W1207 06:26:46.175] I1207 06:26:46.102536   55703 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 06:26:46.203] I1207 06:26:46.202864   55703 controller_utils.go:1034] Caches are synced for garbage collector controller
I1207 06:26:46.304] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I1207 06:26:46.304] namespace/other created
I1207 06:26:46.350] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I1207 06:26:46.440] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:26:46.594] pod/valid-pod created
I1207 06:26:46.694] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 06:26:46.782] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 06:26:46.864] Successful
I1207 06:26:46.864] message:error: a resource cannot be retrieved by name across all namespaces
I1207 06:26:46.864] has:a resource cannot be retrieved by name across all namespaces
I1207 06:26:46.956] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 06:26:47.034] pod "valid-pod" force deleted
I1207 06:26:47.127] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:26:47.203] namespace "other" deleted
W1207 06:26:47.303] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 117 lines ...
I1207 06:27:07.545] +++ command: run_client_config_tests
I1207 06:27:07.556] +++ [1207 06:27:07] Creating namespace namespace-1544164027-4392
I1207 06:27:07.622] namespace/namespace-1544164027-4392 created
I1207 06:27:07.689] Context "test" modified.
I1207 06:27:07.696] +++ [1207 06:27:07] Testing client config
I1207 06:27:07.764] Successful
I1207 06:27:07.764] message:error: stat missing: no such file or directory
I1207 06:27:07.764] has:missing: no such file or directory
I1207 06:27:07.830] Successful
I1207 06:27:07.830] message:error: stat missing: no such file or directory
I1207 06:27:07.831] has:missing: no such file or directory
I1207 06:27:07.895] Successful
I1207 06:27:07.895] message:error: stat missing: no such file or directory
I1207 06:27:07.895] has:missing: no such file or directory
I1207 06:27:07.961] Successful
I1207 06:27:07.961] message:Error in configuration: context was not found for specified context: missing-context
I1207 06:27:07.961] has:context was not found for specified context: missing-context
I1207 06:27:08.027] Successful
I1207 06:27:08.027] message:error: no server found for cluster "missing-cluster"
I1207 06:27:08.028] has:no server found for cluster "missing-cluster"
I1207 06:27:08.095] Successful
I1207 06:27:08.095] message:error: auth info "missing-user" does not exist
I1207 06:27:08.095] has:auth info "missing-user" does not exist
I1207 06:27:08.227] Successful
I1207 06:27:08.227] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1207 06:27:08.228] has:Error loading config file
I1207 06:27:08.295] Successful
I1207 06:27:08.295] message:error: stat missing-config: no such file or directory
I1207 06:27:08.296] has:no such file or directory
I1207 06:27:08.309] +++ exit code: 0
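Each Successful / message / has triple in the client config case above probes one kubeconfig failure mode. Roughly the calls being made; all four are standard kubectl connection flags, and the bogus names mirror the error messages:

kubectl get pods --kubeconfig=missing         # error: stat missing: no such file or directory
kubectl get pods --context=missing-context    # context was not found for specified context
kubectl get pods --cluster=missing-cluster    # no server found for cluster "missing-cluster"
kubectl get pods --user=missing-user          # auth info "missing-user" does not exist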
I1207 06:27:08.340] Recording: run_service_accounts_tests
I1207 06:27:08.341] Running command: run_service_accounts_tests
I1207 06:27:08.359] 
I1207 06:27:08.361] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 76 lines ...
I1207 06:27:15.545]                 job-name=test-job
I1207 06:27:15.545]                 run=pi
I1207 06:27:15.545] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1207 06:27:15.545] Parallelism:    1
I1207 06:27:15.545] Completions:    1
I1207 06:27:15.545] Start Time:     Fri, 07 Dec 2018 06:27:15 +0000
I1207 06:27:15.546] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1207 06:27:15.546] Pod Template:
I1207 06:27:15.546]   Labels:  controller-uid=237a3f10-f9e9-11e8-96a9-0242ac110002
I1207 06:27:15.546]            job-name=test-job
I1207 06:27:15.546]            run=pi
I1207 06:27:15.546]   Containers:
I1207 06:27:15.546]    pi:
... skipping 329 lines ...
I1207 06:27:25.006]   selector:
I1207 06:27:25.006]     role: padawan
I1207 06:27:25.006]   sessionAffinity: None
I1207 06:27:25.006]   type: ClusterIP
I1207 06:27:25.006] status:
I1207 06:27:25.006]   loadBalancer: {}
W1207 06:27:25.107] error: you must specify resources by --filename when --local is set.
W1207 06:27:25.107] Example resource specifications include:
W1207 06:27:25.107]    '-f rsrc.yaml'
W1207 06:27:25.107]    '--filename=rsrc.json'
I1207 06:27:25.208] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1207 06:27:25.335] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1207 06:27:25.418] service "redis-master" deleted
... skipping 93 lines ...
I1207 06:27:31.203] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 06:27:31.290] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 06:27:31.394] daemonset.extensions/bind rolled back
I1207 06:27:31.485] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 06:27:31.574] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 06:27:31.680] Successful
I1207 06:27:31.680] message:error: unable to find specified revision 1000000 in history
I1207 06:27:31.681] has:unable to find specified revision
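The revision checks above exercise kubectl rollout undo --to-revision against the bind daemonset: a valid revision rolls the template back, while an out-of-range revision is rejected before anything changes. A sketch of both calls (the daemonset name comes from this case):

kubectl rollout undo daemonset/bind --to-revision=1          # daemonset.extensions/bind rolled back
kubectl rollout undo daemonset/bind --to-revision=1000000    # error: unable to find specified revision 1000000 in history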
I1207 06:27:31.772] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 06:27:31.862] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 06:27:31.964] daemonset.extensions/bind rolled back
I1207 06:27:32.059] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1207 06:27:32.154] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I1207 06:27:33.470] Namespace:    namespace-1544164052-13751
I1207 06:27:33.470] Selector:     app=guestbook,tier=frontend
I1207 06:27:33.470] Labels:       app=guestbook
I1207 06:27:33.470]               tier=frontend
I1207 06:27:33.470] Annotations:  <none>
I1207 06:27:33.470] Replicas:     3 current / 3 desired
I1207 06:27:33.470] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:33.471] Pod Template:
I1207 06:27:33.471]   Labels:  app=guestbook
I1207 06:27:33.471]            tier=frontend
I1207 06:27:33.471]   Containers:
I1207 06:27:33.471]    php-redis:
I1207 06:27:33.471]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 06:27:33.580] Namespace:    namespace-1544164052-13751
I1207 06:27:33.580] Selector:     app=guestbook,tier=frontend
I1207 06:27:33.580] Labels:       app=guestbook
I1207 06:27:33.580]               tier=frontend
I1207 06:27:33.580] Annotations:  <none>
I1207 06:27:33.580] Replicas:     3 current / 3 desired
I1207 06:27:33.581] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:33.581] Pod Template:
I1207 06:27:33.581]   Labels:  app=guestbook
I1207 06:27:33.581]            tier=frontend
I1207 06:27:33.581]   Containers:
I1207 06:27:33.581]    php-redis:
I1207 06:27:33.581]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1207 06:27:33.687] Namespace:    namespace-1544164052-13751
I1207 06:27:33.687] Selector:     app=guestbook,tier=frontend
I1207 06:27:33.687] Labels:       app=guestbook
I1207 06:27:33.688]               tier=frontend
I1207 06:27:33.688] Annotations:  <none>
I1207 06:27:33.688] Replicas:     3 current / 3 desired
I1207 06:27:33.688] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:33.688] Pod Template:
I1207 06:27:33.688]   Labels:  app=guestbook
I1207 06:27:33.688]            tier=frontend
I1207 06:27:33.688]   Containers:
I1207 06:27:33.688]    php-redis:
I1207 06:27:33.688]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 4 lines ...
I1207 06:27:33.689]       memory:  100Mi
I1207 06:27:33.689]     Environment:
I1207 06:27:33.689]       GET_HOSTS_FROM:  dns
I1207 06:27:33.689]     Mounts:            <none>
I1207 06:27:33.690]   Volumes:             <none>
I1207 06:27:33.690] 
W1207 06:27:33.793] E1207 06:27:31.399932   55703 daemon_controller.go:303] namespace-1544164049-201/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1544164049-201", SelfLink:"/apis/apps/v1/namespaces/namespace-1544164049-201/daemonsets/bind", UID:"2c426e5f-f9e9-11e8-96a9-0242ac110002", ResourceVersion:"1345", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63679760850, loc:(*time.Location)(0x66fa920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1544164049-201\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true", "deprecated.daemonset.template.generation":"3"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000c19840), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0042e9b98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0032caba0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc000c19880), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0038699f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0042e9c10)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1207 06:27:33.796] E1207 06:27:31.972263   55703 daemon_controller.go:303] namespace-1544164049-201/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1544164049-201", SelfLink:"/apis/apps/v1/namespaces/namespace-1544164049-201/daemonsets/bind", UID:"2c426e5f-f9e9-11e8-96a9-0242ac110002", ResourceVersion:"1348", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63679760850, loc:(*time.Location)(0x66fa920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1544164049-201\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true", "deprecated.daemonset.template.generation":"4"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b36da0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0033d5a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00439baa0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001b36e00), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000e53a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0033d5b10)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1207 06:27:33.797] I1207 06:27:32.803809   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"2dec3873-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4qjhg
W1207 06:27:33.797] I1207 06:27:32.806375   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"2dec3873-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nnj24
W1207 06:27:33.797] I1207 06:27:32.806419   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"2dec3873-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ghtzf
W1207 06:27:33.797] I1207 06:27:33.231367   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"2e2dd19a-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mqkx8
W1207 06:27:33.798] I1207 06:27:33.234251   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"2e2dd19a-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jk9r9
W1207 06:27:33.798] I1207 06:27:33.234308   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"2e2dd19a-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hq98c
... skipping 2 lines ...
I1207 06:27:33.899] Namespace:    namespace-1544164052-13751
I1207 06:27:33.899] Selector:     app=guestbook,tier=frontend
I1207 06:27:33.899] Labels:       app=guestbook
I1207 06:27:33.899]               tier=frontend
I1207 06:27:33.899] Annotations:  <none>
I1207 06:27:33.900] Replicas:     3 current / 3 desired
I1207 06:27:33.900] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:33.900] Pod Template:
I1207 06:27:33.900]   Labels:  app=guestbook
I1207 06:27:33.900]            tier=frontend
I1207 06:27:33.900]   Containers:
I1207 06:27:33.900]    php-redis:
I1207 06:27:33.900]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1207 06:27:33.938] Namespace:    namespace-1544164052-13751
I1207 06:27:33.938] Selector:     app=guestbook,tier=frontend
I1207 06:27:33.938] Labels:       app=guestbook
I1207 06:27:33.938]               tier=frontend
I1207 06:27:33.938] Annotations:  <none>
I1207 06:27:33.938] Replicas:     3 current / 3 desired
I1207 06:27:33.938] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:33.938] Pod Template:
I1207 06:27:33.938]   Labels:  app=guestbook
I1207 06:27:33.939]            tier=frontend
I1207 06:27:33.939]   Containers:
I1207 06:27:33.939]    php-redis:
I1207 06:27:33.939]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 06:27:34.044] Namespace:    namespace-1544164052-13751
I1207 06:27:34.044] Selector:     app=guestbook,tier=frontend
I1207 06:27:34.044] Labels:       app=guestbook
I1207 06:27:34.044]               tier=frontend
I1207 06:27:34.045] Annotations:  <none>
I1207 06:27:34.045] Replicas:     3 current / 3 desired
I1207 06:27:34.045] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:34.045] Pod Template:
I1207 06:27:34.045]   Labels:  app=guestbook
I1207 06:27:34.045]            tier=frontend
I1207 06:27:34.045]   Containers:
I1207 06:27:34.045]    php-redis:
I1207 06:27:34.045]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 06:27:34.151] Namespace:    namespace-1544164052-13751
I1207 06:27:34.151] Selector:     app=guestbook,tier=frontend
I1207 06:27:34.151] Labels:       app=guestbook
I1207 06:27:34.152]               tier=frontend
I1207 06:27:34.152] Annotations:  <none>
I1207 06:27:34.152] Replicas:     3 current / 3 desired
I1207 06:27:34.152] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:34.152] Pod Template:
I1207 06:27:34.152]   Labels:  app=guestbook
I1207 06:27:34.152]            tier=frontend
I1207 06:27:34.152]   Containers:
I1207 06:27:34.152]    php-redis:
I1207 06:27:34.153]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1207 06:27:34.259] Namespace:    namespace-1544164052-13751
I1207 06:27:34.259] Selector:     app=guestbook,tier=frontend
I1207 06:27:34.260] Labels:       app=guestbook
I1207 06:27:34.260]               tier=frontend
I1207 06:27:34.260] Annotations:  <none>
I1207 06:27:34.260] Replicas:     3 current / 3 desired
I1207 06:27:34.260] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:34.260] Pod Template:
I1207 06:27:34.260]   Labels:  app=guestbook
I1207 06:27:34.260]            tier=frontend
I1207 06:27:34.260]   Containers:
I1207 06:27:34.260]    php-redis:
I1207 06:27:34.260]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I1207 06:27:35.075] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I1207 06:27:35.169] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I1207 06:27:35.253] replicationcontroller/frontend scaled
I1207 06:27:35.347] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I1207 06:27:35.427] replicationcontroller "frontend" deleted
W1207 06:27:35.528] I1207 06:27:34.445715   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"2e2dd19a-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-mqkx8
W1207 06:27:35.528] error: Expected replicas to be 3, was 2
W1207 06:27:35.528] I1207 06:27:34.983088   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"2e2dd19a-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xv6kr
W1207 06:27:35.529] I1207 06:27:35.258730   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"2e2dd19a-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-xv6kr
W1207 06:27:35.592] I1207 06:27:35.591662   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"redis-master", UID:"2f96048a-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-rdw5w
I1207 06:27:35.692] replicationcontroller/redis-master created
I1207 06:27:35.756] replicationcontroller/redis-slave created
W1207 06:27:35.857] I1207 06:27:35.759551   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"redis-slave", UID:"2faf9597-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1411", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-wbr8b
... skipping 36 lines ...
I1207 06:27:37.350] service "expose-test-deployment" deleted
I1207 06:27:37.448] Successful
I1207 06:27:37.448] message:service/expose-test-deployment exposed
I1207 06:27:37.448] has:service/expose-test-deployment exposed
I1207 06:27:37.525] service "expose-test-deployment" deleted
I1207 06:27:37.614] Successful
I1207 06:27:37.614] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1207 06:27:37.614] See 'kubectl expose -h' for help and examples
I1207 06:27:37.614] has:invalid deployment: no selectors
I1207 06:27:37.698] Successful
I1207 06:27:37.698] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1207 06:27:37.698] See 'kubectl expose -h' for help and examples
I1207 06:27:37.698] has:invalid deployment: no selectors
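kubectl expose builds the new service's selector from the target resource; when it cannot recover one, it refuses with the "invalid deployment: no selectors" error asserted above, and when it can, it succeeds, as the next lines show. The happy-path call is roughly:

kubectl expose deployment nginx-deployment --port=80    # service/nginx-deployment exposed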
I1207 06:27:37.846] deployment.extensions/nginx-deployment created
I1207 06:27:37.944] core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I1207 06:27:38.027] service/nginx-deployment exposed
I1207 06:27:38.121] core.sh:1137: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
... skipping 23 lines ...
I1207 06:27:39.685] service "frontend" deleted
I1207 06:27:39.692] service "frontend-2" deleted
I1207 06:27:39.698] service "frontend-3" deleted
I1207 06:27:39.705] service "frontend-4" deleted
I1207 06:27:39.712] service "frontend-5" deleted
I1207 06:27:39.805] Successful
I1207 06:27:39.805] message:error: cannot expose a Node
I1207 06:27:39.805] has:cannot expose
I1207 06:27:39.891] Successful
I1207 06:27:39.891] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1207 06:27:39.891] has:metadata.name: Invalid value
I1207 06:27:39.981] Successful
I1207 06:27:39.981] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I1207 06:27:42.106] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1207 06:27:42.199] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1207 06:27:42.278] horizontalpodautoscaler.autoscaling "frontend" deleted
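The frontend hpa checked above (min 2 / max 3 / target 80%) corresponds to an autoscale call like the one below; note that --max is required, which is exactly what the 'required flag(s) "max" not set' error a few lines down verifies:

kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80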
W1207 06:27:42.378] I1207 06:27:41.678754   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"3336ed2c-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wgcn4
W1207 06:27:42.379] I1207 06:27:41.681029   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"3336ed2c-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nzlxz
W1207 06:27:42.379] I1207 06:27:41.681960   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164052-13751", Name:"frontend", UID:"3336ed2c-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tmgnp
W1207 06:27:42.379] Error: required flag(s) "max" not set
W1207 06:27:42.379] 
W1207 06:27:42.379] 
W1207 06:27:42.379] Examples:
W1207 06:27:42.380]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1207 06:27:42.380]   kubectl autoscale deployment foo --min=2 --max=10
W1207 06:27:42.380]   
... skipping 54 lines ...
I1207 06:27:42.592]           limits:
I1207 06:27:42.592]             cpu: 300m
I1207 06:27:42.592]           requests:
I1207 06:27:42.592]             cpu: 300m
I1207 06:27:42.592]       terminationGracePeriodSeconds: 0
I1207 06:27:42.592] status: {}
W1207 06:27:42.693] Error from server (NotFound): deployments.extensions "nginx-deployment-resources" not found
I1207 06:27:42.825] deployment.extensions/nginx-deployment-resources created
I1207 06:27:42.920] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1207 06:27:43.009] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 06:27:43.095] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1207 06:27:43.179] deployment.extensions/nginx-deployment-resources resource requirements updated
I1207 06:27:43.271] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 80 lines ...
W1207 06:27:44.290] I1207 06:27:42.838141   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-69c96fd869", UID:"33e6d308-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-9mclp
W1207 06:27:44.291] I1207 06:27:42.838905   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-69c96fd869", UID:"33e6d308-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-lnk2j
W1207 06:27:44.291] I1207 06:27:43.182671   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources", UID:"33e646ff-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W1207 06:27:44.291] I1207 06:27:43.185318   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-6c5996c457", UID:"341cd1e6-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-t9sfb
W1207 06:27:44.291] I1207 06:27:43.188436   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources", UID:"33e646ff-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W1207 06:27:44.292] I1207 06:27:43.192951   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources", UID:"33e646ff-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1667", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 2
W1207 06:27:44.292] E1207 06:27:43.194237   55703 replica_set.go:450] Sync "namespace-1544164052-13751/nginx-deployment-resources-6c5996c457" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-resources-6c5996c457": the object has been modified; please apply your changes to the latest version and try again
W1207 06:27:44.292] I1207 06:27:43.194395   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-69c96fd869", UID:"33e6d308-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-8hlx8
W1207 06:27:44.292] I1207 06:27:43.198514   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-6c5996c457", UID:"341cd1e6-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1675", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-66gw2
W1207 06:27:44.293] error: unable to find container named redis
W1207 06:27:44.293] I1207 06:27:43.543851   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources", UID:"33e646ff-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1689", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 0
W1207 06:27:44.293] I1207 06:27:43.548957   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-69c96fd869", UID:"33e6d308-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-9mclp
W1207 06:27:44.293] I1207 06:27:43.549132   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-69c96fd869", UID:"33e6d308-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-lnk2j
W1207 06:27:44.294] I1207 06:27:43.549952   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources", UID:"33e646ff-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 2
W1207 06:27:44.294] I1207 06:27:43.553721   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-5f4579485f", UID:"3452fb1c-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-ln7x6
W1207 06:27:44.294] I1207 06:27:43.560304   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-5f4579485f", UID:"3452fb1c-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-rbv6q
W1207 06:27:44.295] I1207 06:27:43.823827   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources", UID:"33e646ff-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1716", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-5f4579485f to 0
W1207 06:27:44.295] I1207 06:27:43.829154   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-5f4579485f", UID:"3452fb1c-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5f4579485f-ln7x6
W1207 06:27:44.295] I1207 06:27:43.830272   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-5f4579485f", UID:"3452fb1c-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5f4579485f-rbv6q
W1207 06:27:44.295] I1207 06:27:43.830201   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources", UID:"33e646ff-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1719", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 2
W1207 06:27:44.296] I1207 06:27:43.832542   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-ff8d89cb6", UID:"347dc3dc-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1726", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-96p7q
W1207 06:27:44.296] I1207 06:27:43.937145   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164052-13751", Name:"nginx-deployment-resources-ff8d89cb6", UID:"347dc3dc-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1726", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-f5d8b
W1207 06:27:44.296] error: you must specify resources by --filename when --local is set.
W1207 06:27:44.296] Example resource specifications include:
W1207 06:27:44.296]    '-f rsrc.yaml'
W1207 06:27:44.296]    '--filename=rsrc.json'
I1207 06:27:44.397] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1207 06:27:44.453] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1207 06:27:44.553] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I1207 06:27:45.995]                 pod-template-hash=55c9b846cc
I1207 06:27:45.995] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1207 06:27:45.995]                 deployment.kubernetes.io/max-replicas: 2
I1207 06:27:45.995]                 deployment.kubernetes.io/revision: 1
I1207 06:27:45.995] Controlled By:  Deployment/test-nginx-apps
I1207 06:27:45.995] Replicas:       1 current / 1 desired
I1207 06:27:45.996] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:45.996] Pod Template:
I1207 06:27:45.996]   Labels:  app=test-nginx-apps
I1207 06:27:45.996]            pod-template-hash=55c9b846cc
I1207 06:27:45.996]   Containers:
I1207 06:27:45.996]    nginx:
I1207 06:27:45.996]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 95 lines ...
W1207 06:27:50.058] I1207 06:27:49.586832   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-6f6bb85d9c", UID:"379b297a-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1900", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-6f6bb85d9c-xmxg2
W1207 06:27:50.059] I1207 06:27:49.590846   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx", UID:"379a608a-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1897", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 2
W1207 06:27:50.059] I1207 06:27:49.595279   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-9486b7cb7", UID:"37ec76be-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1908", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-br7kv
I1207 06:27:51.043] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 06:27:51.223] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 06:27:51.320] deployment.extensions/nginx rolled back
W1207 06:27:51.420] error: unable to find specified revision 1000000 in history
I1207 06:27:52.415] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1207 06:27:52.500] deployment.extensions/nginx paused
W1207 06:27:52.602] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I1207 06:27:52.703] deployment.extensions/nginx resumed
I1207 06:27:52.792] deployment.extensions/nginx rolled back
I1207 06:27:52.971]     deployment.kubernetes.io/revision-history: 1,3
W1207 06:27:53.149] error: desired revision (3) is different from the running revision (5)
I1207 06:27:53.302] deployment.extensions/nginx2 created
I1207 06:27:53.387] deployment.extensions "nginx2" deleted
I1207 06:27:53.487] deployment.extensions "nginx" deleted
I1207 06:27:53.582] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:27:53.738] deployment.extensions/nginx-deployment created
I1207 06:27:53.838] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 21 lines ...
W1207 06:27:55.384] I1207 06:27:53.747576   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-646d4f779d", UID:"3a67f25b-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1971", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-7tn4d
W1207 06:27:55.384] I1207 06:27:54.096371   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment", UID:"3a676414-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1984", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W1207 06:27:55.384] I1207 06:27:54.098864   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-85db47bbdb", UID:"3a9e13bd-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-gqptx
W1207 06:27:55.384] I1207 06:27:54.102525   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment", UID:"3a676414-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1984", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1207 06:27:55.385] I1207 06:27:54.107744   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-646d4f779d", UID:"3a67f25b-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1991", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-m8fd2
W1207 06:27:55.385] I1207 06:27:54.107943   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment", UID:"3a676414-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 2
W1207 06:27:55.385] E1207 06:27:54.108687   55703 replica_set.go:450] Sync "namespace-1544164064-8302/nginx-deployment-85db47bbdb" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-85db47bbdb": the object has been modified; please apply your changes to the latest version and try again
W1207 06:27:55.385] I1207 06:27:54.111946   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-85db47bbdb", UID:"3a9e13bd-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1994", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-t7dzw
W1207 06:27:55.385] error: unable to find container named "redis"
W1207 06:27:55.386] I1207 06:27:55.290552   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment", UID:"3a676414-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2017", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1207 06:27:55.386] I1207 06:27:55.295391   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-646d4f779d", UID:"3a67f25b-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2021", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-7tn4d
W1207 06:27:55.386] I1207 06:27:55.296332   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-646d4f779d", UID:"3a67f25b-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2021", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-fxgdj
W1207 06:27:55.386] I1207 06:27:55.296758   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment", UID:"3a676414-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 2
W1207 06:27:55.387] I1207 06:27:55.300936   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-dc756cc6", UID:"3b537128-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-5gw4p
W1207 06:27:55.387] I1207 06:27:55.304680   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-dc756cc6", UID:"3b537128-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-xcb6f
... skipping 21 lines ...
I1207 06:27:57.230] deployment.extensions/nginx-deployment env updated
W1207 06:27:57.331] I1207 06:27:56.767285   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment", UID:"3bcc0225-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2070", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 1
W1207 06:27:57.331] I1207 06:27:56.770648   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-5b795689cd", UID:"3c35b415-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-kd62m
W1207 06:27:57.332] I1207 06:27:56.773465   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment", UID:"3bcc0225-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2070", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1207 06:27:57.332] I1207 06:27:56.777965   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-646d4f779d", UID:"3bcca83e-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-79f5t
W1207 06:27:57.332] I1207 06:27:56.779229   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment", UID:"3bcc0225-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2072", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 2
W1207 06:27:57.333] E1207 06:27:56.779414   55703 replica_set.go:450] Sync "namespace-1544164064-8302/nginx-deployment-5b795689cd" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5b795689cd": the object has been modified; please apply your changes to the latest version and try again
W1207 06:27:57.333] I1207 06:27:56.783506   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-5b795689cd", UID:"3c35b415-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2081", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-jkr6f
W1207 06:27:57.333] I1207 06:27:56.849956   55703 horizontal.go:309] Horizontal Pod Autoscaler frontend has been deleted in namespace-1544164052-13751
W1207 06:27:57.334] I1207 06:27:57.056091   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment", UID:"3bcc0225-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2094", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5b795689cd to 0
W1207 06:27:57.334] I1207 06:27:57.060517   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-5b795689cd", UID:"3c35b415-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2098", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-kd62m
W1207 06:27:57.334] I1207 06:27:57.060943   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-5b795689cd", UID:"3c35b415-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2098", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-jkr6f
W1207 06:27:57.335] I1207 06:27:57.062324   55703 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment", UID:"3bcc0225-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2097", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5766b7c95b to 2
... skipping 34 lines ...
I1207 06:27:58.815] replicaset.extensions/frontend-no-cascade created
I1207 06:27:58.919] apps.sh:518: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I1207 06:27:58.922] +++ [1207 06:27:58] Deleting rs
I1207 06:27:59.004] replicaset.extensions "frontend-no-cascade" deleted
W1207 06:27:59.104] I1207 06:27:57.524073   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-5766b7c95b", UID:"3c60dde7-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2144", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5766b7c95b-wt6x9
W1207 06:27:59.105] I1207 06:27:57.573009   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164064-8302", Name:"nginx-deployment-5766b7c95b", UID:"3c60dde7-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2144", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5766b7c95b-76thc
W1207 06:27:59.105] E1207 06:27:57.970396   55703 replica_set.go:450] Sync "namespace-1544164064-8302/nginx-deployment-794dcdf6bb" failed with replicasets.apps "nginx-deployment-794dcdf6bb" not found
W1207 06:27:59.106] E1207 06:27:58.020450   55703 replica_set.go:450] Sync "namespace-1544164064-8302/nginx-deployment-65b869c68c" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-65b869c68c": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1544164064-8302/nginx-deployment-65b869c68c, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3c8c821f-f9e9-11e8-96a9-0242ac110002, UID in object meta: 
W1207 06:27:59.106] E1207 06:27:58.070240   55703 replica_set.go:450] Sync "namespace-1544164064-8302/nginx-deployment-669d4f8fc9" failed with replicasets.apps "nginx-deployment-669d4f8fc9" not found
W1207 06:27:59.106] E1207 06:27:58.119729   55703 replica_set.go:450] Sync "namespace-1544164064-8302/nginx-deployment-5766b7c95b" failed with replicasets.apps "nginx-deployment-5766b7c95b" not found
W1207 06:27:59.106] E1207 06:27:58.220594   55703 replica_set.go:450] Sync "namespace-1544164064-8302/nginx-deployment-7b8f7659b7" failed with replicasets.apps "nginx-deployment-7b8f7659b7" not found
W1207 06:27:59.107] I1207 06:27:58.394295   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164078-29904", Name:"frontend", UID:"3d2d1ad1-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-z56tb
W1207 06:27:59.107] I1207 06:27:58.397022   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164078-29904", Name:"frontend", UID:"3d2d1ad1-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b9pdw
W1207 06:27:59.107] I1207 06:27:58.421547   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164078-29904", Name:"frontend", UID:"3d2d1ad1-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zlxwb
W1207 06:27:59.107] E1207 06:27:58.619970   55703 replica_set.go:450] Sync "namespace-1544164078-29904/frontend" failed with replicasets.apps "frontend" not found
W1207 06:27:59.108] I1207 06:27:58.818433   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164078-29904", Name:"frontend-no-cascade", UID:"3d6e2451-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2195", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-6llcm
W1207 06:27:59.108] I1207 06:27:58.821540   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164078-29904", Name:"frontend-no-cascade", UID:"3d6e2451-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2195", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-sj96m
W1207 06:27:59.109] I1207 06:27:58.821633   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164078-29904", Name:"frontend-no-cascade", UID:"3d6e2451-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2195", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-hjvc2
W1207 06:27:59.109] E1207 06:27:59.070079   55703 replica_set.go:450] Sync "namespace-1544164078-29904/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
I1207 06:27:59.209] apps.sh:522: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:27:59.213] apps.sh:524: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I1207 06:27:59.298] pod "frontend-no-cascade-6llcm" deleted
I1207 06:27:59.306] pod "frontend-no-cascade-hjvc2" deleted
I1207 06:27:59.311] pod "frontend-no-cascade-sj96m" deleted
I1207 06:27:59.408] apps.sh:527: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 5 lines ...
I1207 06:27:59.878] Namespace:    namespace-1544164078-29904
I1207 06:27:59.878] Selector:     app=guestbook,tier=frontend
I1207 06:27:59.878] Labels:       app=guestbook
I1207 06:27:59.878]               tier=frontend
I1207 06:27:59.878] Annotations:  <none>
I1207 06:27:59.878] Replicas:     3 current / 3 desired
I1207 06:27:59.878] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:59.879] Pod Template:
I1207 06:27:59.879]   Labels:  app=guestbook
I1207 06:27:59.879]            tier=frontend
I1207 06:27:59.879]   Containers:
I1207 06:27:59.879]    php-redis:
I1207 06:27:59.879]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 06:27:59.986] Namespace:    namespace-1544164078-29904
I1207 06:27:59.987] Selector:     app=guestbook,tier=frontend
I1207 06:27:59.987] Labels:       app=guestbook
I1207 06:27:59.987]               tier=frontend
I1207 06:27:59.987] Annotations:  <none>
I1207 06:27:59.987] Replicas:     3 current / 3 desired
I1207 06:27:59.987] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:27:59.987] Pod Template:
I1207 06:27:59.987]   Labels:  app=guestbook
I1207 06:27:59.987]            tier=frontend
I1207 06:27:59.987]   Containers:
I1207 06:27:59.988]    php-redis:
I1207 06:27:59.988]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1207 06:28:00.092] Namespace:    namespace-1544164078-29904
I1207 06:28:00.092] Selector:     app=guestbook,tier=frontend
I1207 06:28:00.092] Labels:       app=guestbook
I1207 06:28:00.092]               tier=frontend
I1207 06:28:00.092] Annotations:  <none>
I1207 06:28:00.092] Replicas:     3 current / 3 desired
I1207 06:28:00.092] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:00.092] Pod Template:
I1207 06:28:00.092]   Labels:  app=guestbook
I1207 06:28:00.093]            tier=frontend
I1207 06:28:00.093]   Containers:
I1207 06:28:00.093]    php-redis:
I1207 06:28:00.093]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I1207 06:28:00.198] Namespace:    namespace-1544164078-29904
I1207 06:28:00.198] Selector:     app=guestbook,tier=frontend
I1207 06:28:00.198] Labels:       app=guestbook
I1207 06:28:00.198]               tier=frontend
I1207 06:28:00.198] Annotations:  <none>
I1207 06:28:00.199] Replicas:     3 current / 3 desired
I1207 06:28:00.199] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:00.199] Pod Template:
I1207 06:28:00.199]   Labels:  app=guestbook
I1207 06:28:00.199]            tier=frontend
I1207 06:28:00.199]   Containers:
I1207 06:28:00.199]    php-redis:
I1207 06:28:00.199]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 21 lines ...
I1207 06:28:00.404] Namespace:    namespace-1544164078-29904
I1207 06:28:00.404] Selector:     app=guestbook,tier=frontend
I1207 06:28:00.404] Labels:       app=guestbook
I1207 06:28:00.404]               tier=frontend
I1207 06:28:00.404] Annotations:  <none>
I1207 06:28:00.405] Replicas:     3 current / 3 desired
I1207 06:28:00.405] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:00.405] Pod Template:
I1207 06:28:00.405]   Labels:  app=guestbook
I1207 06:28:00.405]            tier=frontend
I1207 06:28:00.405]   Containers:
I1207 06:28:00.405]    php-redis:
I1207 06:28:00.405]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 06:28:00.446] Namespace:    namespace-1544164078-29904
I1207 06:28:00.446] Selector:     app=guestbook,tier=frontend
I1207 06:28:00.446] Labels:       app=guestbook
I1207 06:28:00.446]               tier=frontend
I1207 06:28:00.446] Annotations:  <none>
I1207 06:28:00.446] Replicas:     3 current / 3 desired
I1207 06:28:00.446] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:00.446] Pod Template:
I1207 06:28:00.446]   Labels:  app=guestbook
I1207 06:28:00.446]            tier=frontend
I1207 06:28:00.446]   Containers:
I1207 06:28:00.447]    php-redis:
I1207 06:28:00.447]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 06:28:00.549] Namespace:    namespace-1544164078-29904
I1207 06:28:00.549] Selector:     app=guestbook,tier=frontend
I1207 06:28:00.549] Labels:       app=guestbook
I1207 06:28:00.549]               tier=frontend
I1207 06:28:00.549] Annotations:  <none>
I1207 06:28:00.549] Replicas:     3 current / 3 desired
I1207 06:28:00.549] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:00.549] Pod Template:
I1207 06:28:00.549]   Labels:  app=guestbook
I1207 06:28:00.549]            tier=frontend
I1207 06:28:00.549]   Containers:
I1207 06:28:00.549]    php-redis:
I1207 06:28:00.549]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I1207 06:28:00.656] Namespace:    namespace-1544164078-29904
I1207 06:28:00.657] Selector:     app=guestbook,tier=frontend
I1207 06:28:00.657] Labels:       app=guestbook
I1207 06:28:00.657]               tier=frontend
I1207 06:28:00.657] Annotations:  <none>
I1207 06:28:00.657] Replicas:     3 current / 3 desired
I1207 06:28:00.657] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:00.657] Pod Template:
I1207 06:28:00.657]   Labels:  app=guestbook
I1207 06:28:00.657]            tier=frontend
I1207 06:28:00.657]   Containers:
I1207 06:28:00.657]    php-redis:
I1207 06:28:00.657]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I1207 06:28:05.815] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1207 06:28:05.904] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1207 06:28:05.980] horizontalpodautoscaler.autoscaling "frontend" deleted
W1207 06:28:06.081] I1207 06:28:05.370034   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164078-29904", Name:"frontend", UID:"4155b3c5-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zbv79
W1207 06:28:06.082] I1207 06:28:05.372488   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164078-29904", Name:"frontend", UID:"4155b3c5-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9v84j
W1207 06:28:06.082] I1207 06:28:05.372763   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544164078-29904", Name:"frontend", UID:"4155b3c5-f9e9-11e8-96a9-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-782mp
W1207 06:28:06.082] Error: required flag(s) "max" not set
W1207 06:28:06.082] 
W1207 06:28:06.082] 
W1207 06:28:06.082] Examples:
W1207 06:28:06.082]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1207 06:28:06.082]   kubectl autoscale deployment foo --min=2 --max=10
W1207 06:28:06.082]   
... skipping 88 lines ...
I1207 06:28:09.000] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 06:28:09.093] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 06:28:09.196] statefulset.apps/nginx rolled back
I1207 06:28:09.301] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1207 06:28:09.391] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 06:28:09.497] Successful
I1207 06:28:09.497] message:error: unable to find specified revision 1000000 in history
I1207 06:28:09.497] has:unable to find specified revision
I1207 06:28:09.586] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1207 06:28:09.677] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 06:28:09.776] statefulset.apps/nginx rolled back
I1207 06:28:09.871] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I1207 06:28:09.960] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I1207 06:28:11.743] Name:         mock
I1207 06:28:11.743] Namespace:    namespace-1544164090-15368
I1207 06:28:11.743] Selector:     app=mock
I1207 06:28:11.743] Labels:       app=mock
I1207 06:28:11.743] Annotations:  <none>
I1207 06:28:11.743] Replicas:     1 current / 1 desired
I1207 06:28:11.744] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:11.744] Pod Template:
I1207 06:28:11.744]   Labels:  app=mock
I1207 06:28:11.744]   Containers:
I1207 06:28:11.744]    mock-container:
I1207 06:28:11.744]     Image:        k8s.gcr.io/pause:2.0
I1207 06:28:11.744]     Port:         9949/TCP
... skipping 56 lines ...
I1207 06:28:13.910] Name:         mock
I1207 06:28:13.910] Namespace:    namespace-1544164090-15368
I1207 06:28:13.910] Selector:     app=mock
I1207 06:28:13.910] Labels:       app=mock
I1207 06:28:13.910] Annotations:  <none>
I1207 06:28:13.911] Replicas:     1 current / 1 desired
I1207 06:28:13.911] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:13.911] Pod Template:
I1207 06:28:13.911]   Labels:  app=mock
I1207 06:28:13.911]   Containers:
I1207 06:28:13.911]    mock-container:
I1207 06:28:13.911]     Image:        k8s.gcr.io/pause:2.0
I1207 06:28:13.911]     Port:         9949/TCP
... skipping 56 lines ...
I1207 06:28:16.110] Name:         mock
I1207 06:28:16.111] Namespace:    namespace-1544164090-15368
I1207 06:28:16.111] Selector:     app=mock
I1207 06:28:16.111] Labels:       app=mock
I1207 06:28:16.111] Annotations:  <none>
I1207 06:28:16.111] Replicas:     1 current / 1 desired
I1207 06:28:16.111] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:16.111] Pod Template:
I1207 06:28:16.111]   Labels:  app=mock
I1207 06:28:16.111]   Containers:
I1207 06:28:16.111]    mock-container:
I1207 06:28:16.111]     Image:        k8s.gcr.io/pause:2.0
I1207 06:28:16.112]     Port:         9949/TCP
... skipping 42 lines ...
I1207 06:28:18.176] Namespace:    namespace-1544164090-15368
I1207 06:28:18.176] Selector:     app=mock
I1207 06:28:18.176] Labels:       app=mock
I1207 06:28:18.176]               status=replaced
I1207 06:28:18.176] Annotations:  <none>
I1207 06:28:18.177] Replicas:     1 current / 1 desired
I1207 06:28:18.177] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:18.177] Pod Template:
I1207 06:28:18.177]   Labels:  app=mock
I1207 06:28:18.177]   Containers:
I1207 06:28:18.177]    mock-container:
I1207 06:28:18.177]     Image:        k8s.gcr.io/pause:2.0
I1207 06:28:18.177]     Port:         9949/TCP
... skipping 11 lines ...
I1207 06:28:18.178] Namespace:    namespace-1544164090-15368
I1207 06:28:18.178] Selector:     app=mock2
I1207 06:28:18.179] Labels:       app=mock2
I1207 06:28:18.179]               status=replaced
I1207 06:28:18.179] Annotations:  <none>
I1207 06:28:18.179] Replicas:     1 current / 1 desired
I1207 06:28:18.179] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 06:28:18.179] Pod Template:
I1207 06:28:18.179]   Labels:  app=mock2
I1207 06:28:18.179]   Containers:
I1207 06:28:18.179]    mock-container:
I1207 06:28:18.179]     Image:        k8s.gcr.io/pause:2.0
I1207 06:28:18.180]     Port:         9949/TCP
... skipping 105 lines ...
I1207 06:28:22.913] Context "test" modified.
I1207 06:28:22.920] +++ [1207 06:28:22] Testing persistent volumes
W1207 06:28:23.021] I1207 06:28:20.545173   55703 horizontal.go:309] Horizontal Pod Autoscaler frontend has been deleted in namespace-1544164078-29904
W1207 06:28:23.022] I1207 06:28:22.098171   55703 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544164090-15368", Name:"mock", UID:"4b4e94cc-f9e9-11e8-96a9-0242ac110002", APIVersion:"v1", ResourceVersion:"2674", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-8szxk
I1207 06:28:23.122] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 06:28:23.214] persistentvolume/pv0001 created
W1207 06:28:23.315] E1207 06:28:23.220721   55703 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I1207 06:28:23.415] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I1207 06:28:23.416] persistentvolume "pv0001" deleted
I1207 06:28:23.582] persistentvolume/pv0002 created
I1207 06:28:23.690] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I1207 06:28:23.775] persistentvolume "pv0002" deleted
I1207 06:28:23.950] persistentvolume/pv0003 created
... skipping 480 lines ...
I1207 06:28:29.182] yes
I1207 06:28:29.182] has:the server doesn't have a resource type
I1207 06:28:29.266] Successful
I1207 06:28:29.266] message:yes
I1207 06:28:29.266] has:yes
I1207 06:28:29.351] Successful
I1207 06:28:29.352] message:error: --subresource can not be used with NonResourceURL
I1207 06:28:29.352] has:subresource can not be used with NonResourceURL
I1207 06:28:29.441] Successful
I1207 06:28:29.535] Successful
I1207 06:28:29.536] message:yes
I1207 06:28:29.536] 0
I1207 06:28:29.536] has:0
... skipping 6 lines ...
I1207 06:28:29.748] role.rbac.authorization.k8s.io/testing-R reconciled
I1207 06:28:29.856] legacy-script.sh:736: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I1207 06:28:29.953] legacy-script.sh:737: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I1207 06:28:30.057] legacy-script.sh:738: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I1207 06:28:30.161] legacy-script.sh:739: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I1207 06:28:30.250] Successful
I1207 06:28:30.251] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I1207 06:28:30.251] has:only rbac.authorization.k8s.io/v1 is supported
I1207 06:28:30.351] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I1207 06:28:30.358] role.rbac.authorization.k8s.io "testing-R" deleted
I1207 06:28:30.370] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I1207 06:28:30.380] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I1207 06:28:30.392] Recording: run_retrieve_multiple_tests
... skipping 893 lines ...
I1207 06:28:57.030] message:node/127.0.0.1 already uncordoned (dry run)
I1207 06:28:57.030] has:already uncordoned
I1207 06:28:57.118] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I1207 06:28:57.194] node/127.0.0.1 labeled
I1207 06:28:57.282] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I1207 06:28:57.350] Successful
I1207 06:28:57.350] message:error: cannot specify both a node name and a --selector option
I1207 06:28:57.350] See 'kubectl drain -h' for help and examples
I1207 06:28:57.350] has:cannot specify both a node name
I1207 06:28:57.418] Successful
I1207 06:28:57.418] message:error: USAGE: cordon NODE [flags]
I1207 06:28:57.418] See 'kubectl cordon -h' for help and examples
I1207 06:28:57.418] has:error\: USAGE\: cordon NODE
I1207 06:28:57.494] node/127.0.0.1 already uncordoned
I1207 06:28:57.567] Successful
I1207 06:28:57.568] message:error: You must provide one or more resources by argument or filename.
I1207 06:28:57.568] Example resource specifications include:
I1207 06:28:57.568]    '-f rsrc.yaml'
I1207 06:28:57.568]    '--filename=rsrc.json'
I1207 06:28:57.568]    '<resource> <name>'
I1207 06:28:57.568]    '<resource>'
I1207 06:28:57.568] has:must provide one or more resources
... skipping 15 lines ...
I1207 06:28:57.993] Successful
I1207 06:28:57.993] message:The following kubectl-compatible plugins are available:
I1207 06:28:57.993] 
I1207 06:28:57.993] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I1207 06:28:57.993]   - warning: kubectl-version overwrites existing command: "kubectl version"
I1207 06:28:57.993] 
I1207 06:28:57.994] error: one plugin warning was found
I1207 06:28:57.994] has:kubectl-version overwrites existing command: "kubectl version"
I1207 06:28:58.064] Successful
I1207 06:28:58.064] message:The following kubectl-compatible plugins are available:
I1207 06:28:58.065] 
I1207 06:28:58.065] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 06:28:58.065] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I1207 06:28:58.065]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 06:28:58.065] 
I1207 06:28:58.065] error: one plugin warning was found
I1207 06:28:58.065] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I1207 06:28:58.136] Successful
I1207 06:28:58.136] message:The following kubectl-compatible plugins are available:
I1207 06:28:58.136] 
I1207 06:28:58.136] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 06:28:58.137] has:plugins are available
I1207 06:28:58.208] Successful
I1207 06:28:58.208] message:
I1207 06:28:58.208] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I1207 06:28:58.209] error: unable to find any kubectl plugins in your PATH
I1207 06:28:58.209] has:unable to find any kubectl plugins in your PATH
I1207 06:28:58.277] Successful
I1207 06:28:58.277] message:I am plugin foo
I1207 06:28:58.277] has:plugin foo
I1207 06:28:58.348] Successful
I1207 06:28:58.349] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.901+5d76949082d149", GitCommit:"5d76949082d14918dea6d2bae668bb58512a4408", GitTreeState:"clean", BuildDate:"2018-12-07T06:22:28Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I1207 06:28:58.423] 
I1207 06:28:58.426] +++ Running case: test-cmd.run_impersonation_tests 
I1207 06:28:58.428] +++ working dir: /go/src/k8s.io/kubernetes
I1207 06:28:58.430] +++ command: run_impersonation_tests
I1207 06:28:58.440] +++ [1207 06:28:58] Testing impersonation
I1207 06:28:58.506] Successful
I1207 06:28:58.507] message:error: requesting groups or user-extra for  without impersonating a user
I1207 06:28:58.507] has:without impersonating a user
I1207 06:28:58.653] certificatesigningrequest.certificates.k8s.io/foo created
I1207 06:28:58.741] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I1207 06:28:58.827] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I1207 06:28:58.907] certificatesigningrequest.certificates.k8s.io "foo" deleted
I1207 06:28:59.056] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 17 lines ...
W1207 06:28:59.548] I1207 06:28:59.544024   52326 available_controller.go:326] Shutting down AvailableConditionController
W1207 06:28:59.548] I1207 06:28:59.544666   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.549] I1207 06:28:59.548818   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.549] I1207 06:28:59.544796   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.549] I1207 06:28:59.549273   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.550] I1207 06:28:59.544961   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.550] W1207 06:28:59.545033   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 06:28:59.550] W1207 06:28:59.545044   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 06:28:59.551] I1207 06:28:59.545123   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.551] I1207 06:28:59.545242   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.551] I1207 06:28:59.545291   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.551] W1207 06:28:59.545291   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 06:28:59.552] W1207 06:28:59.545716   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 06:28:59.552] I1207 06:28:59.545831   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 4 lines ...
W1207 06:28:59.553] I1207 06:28:59.546032   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W1207 06:28:59.553] I1207 06:28:59.546073   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.553] I1207 06:28:59.546128   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.553] E1207 06:28:59.546155   52326 controller.go:172] rpc error: code = Unavailable desc = transport is closing
W1207 06:28:59.554] I1207 06:28:59.546180   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 5 lines ...
... skipping 20 lines ...
W1207 06:28:59.558] I1207 06:28:59.546982   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.558] I1207 06:28:59.550730   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.558] I1207 06:28:59.546982   52326 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W1207 06:28:59.558] I1207 06:28:59.547012   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.559] I1207 06:28:59.547029   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.559] I1207 06:28:59.550766   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.559] W1207 06:28:59.547046   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 06:28:59.559] W1207 06:28:59.547041   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 06:28:59.559] I1207 06:28:59.550600   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.560] W1207 06:28:59.547083   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 5 lines ...
W1207 06:28:59.561] I1207 06:28:59.547116   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.561] I1207 06:28:59.550896   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.562] W1207 06:28:59.547136   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 06:28:59.562] W1207 06:28:59.547138   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 06:28:59.562] W1207 06:28:59.547140   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 06:28:59.562] W1207 06:28:59.547143   52326 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 06:28:59.562] I1207 06:28:59.547147   52326 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 06:28:59.564] W1207 06:28:59.551049   52326 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
... skipping 33 lines ...
... skipping 89 lines ...
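The churn above is the apiserver's etcd client tearing down: once a test's etcd fixture stops listening on 127.0.0.1:2379, every pooled gRPC connection logs a connection-refused transport error and keeps retrying until it is closed. A minimal sketch (not from this build) that provokes the same message, assuming the grpc-go dial API of this era:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
)

func main() {
	// Nothing listens on 2379 here, so the client internally logs the
	// same clientconn.go "connection refused ... Reconnecting" lines
	// seen above until the dial context expires.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "127.0.0.1:2379",
		grpc.WithInsecure(), // plaintext, as the local etcd fixture uses
		grpc.WithBlock(),    // block so the failure surfaces as an error
	)
	if err != nil {
		log.Printf("dial failed after retries: %v", err)
		return
	}
	defer conn.Close()
}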
I1207 06:29:04.141] +++ [1207 06:29:04] On try 2, etcd: : http://127.0.0.1:2379
I1207 06:29:04.149] {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
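The {"action":"set",...} line is an etcd v2 keys-API response: the harness confirms etcd is serving by writing a sentinel /_test key before launching the test binaries. A hedged sketch of that probe, with only the endpoint and key taken from the log:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// PUT an empty value to /_test via etcd's v2 keys API; a 2xx
	// response with an "action":"set" body means etcd is up.
	body := strings.NewReader(url.Values{"value": {""}}.Encode())
	req, err := http.NewRequest(http.MethodPut, "http://127.0.0.1:2379/v2/keys/_test", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err) // etcd not reachable yet; the harness retries
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}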
I1207 06:29:04.152] +++ [1207 06:29:04] Running integration test cases
I1207 06:29:08.249] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1alpha1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1beta1,apps/v1beta2,apps/v1,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I1207 06:29:08.284] +++ [1207 06:29:08] Running tests without code coverage
I1207 06:32:42.275] ok  	k8s.io/kubernetes/test/integration/apimachinery	168.691s
I1207 06:32:42.275] FAIL	k8s.io/kubernetes/test/integration/apiserver	37.606s
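The lone failing case inside that package is the Test202StatusCode integration test. Broadly, a 202-status test asserts that a request whose work completes asynchronously, such as a graceful delete, answers 202 Accepted rather than 200 OK. The sketch below is illustrative only, not the actual test body, and serverURL is hypothetical (the real test runs against an in-process apiserver fixture):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical endpoint; deletion of this resource is assumed to
	// be asynchronous, so the expected status is 202 Accepted.
	serverURL := "http://127.0.0.1:8080/api/v1/namespaces/default/pods/example"

	req, err := http.NewRequest(http.MethodDelete, serverURL, nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusAccepted { // 202
		fmt.Printf("expected 202 Accepted, got %d\n", resp.StatusCode)
	}
}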
I1207 06:32:42.275] [restful] 2018/12/07 06:31:30 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:41781/swaggerapi
I1207 06:32:42.276] [restful] 2018/12/07 06:31:30 log.go:33: [restful/swagger] https://127.0.0.1:41781/swaggerui/ is mapped to folder /swagger-ui/
I1207 06:32:42.276] [restful] 2018/12/07 06:31:32 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:41781/swaggerapi
I1207 06:32:42.276] [restful] 2018/12/07 06:31:32 log.go:33: [restful/swagger] https://127.0.0.1:41781/swaggerui/ is mapped to folder /swagger-ui/
I1207 06:32:42.276] ok  	k8s.io/kubernetes/test/integration/auth	92.620s
I1207 06:32:42.276] [restful] 2018/12/07 06:30:29 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:33935/swaggerapi
... skipping 229 lines ...
I1207 06:41:46.305] [restful] 2018/12/07 06:34:37 log.go:33: [restful/swagger] https://127.0.0.1:33071/swaggerui/ is mapped to folder /swagger-ui/
I1207 06:41:46.305] ok  	k8s.io/kubernetes/test/integration/tls	13.030s
I1207 06:41:46.305] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.103s
I1207 06:41:46.306] ok  	k8s.io/kubernetes/test/integration/volume	91.474s
I1207 06:41:46.306] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	142.242s
I1207 06:41:47.692] +++ [1207 06:41:47] Saved JUnit XML test report to /workspace/artifacts/junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-062908.xml
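The saved JUnit XML is the easiest place to recover the failing cases from this run. A sketch that lists them, assuming a single <testsuite> root in the common JUnit schema; only the report path is copied from the line above:

package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

type testSuite struct {
	Cases []struct {
		Name    string `xml:"name,attr"`
		Failure *struct {
			Message string `xml:"message,attr"`
		} `xml:"failure"`
	} `xml:"testcase"`
}

func main() {
	data, err := os.ReadFile("/workspace/artifacts/junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-062908.xml")
	if err != nil {
		panic(err)
	}
	var suite testSuite
	if err := xml.Unmarshal(data, &suite); err != nil {
		panic(err)
	}
	// A testcase with a <failure> child failed; everything else passed.
	for _, c := range suite.Cases {
		if c.Failure != nil {
			fmt.Println("FAIL:", c.Name, "-", c.Failure.Message)
		}
	}
}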
I1207 06:41:47.695] Makefile:184: recipe for target 'test' failed
I1207 06:41:47.705] +++ [1207 06:41:47] Cleaning up etcd
W1207 06:41:47.806] make[1]: *** [test] Error 1
W1207 06:41:47.806] !!! [1207 06:41:47] Call tree:
W1207 06:41:47.806] !!! [1207 06:41:47]  1: hack/make-rules/test-integration.sh:105 runTests(...)
I1207 06:41:47.916] +++ [1207 06:41:47] Integration test cleanup complete
I1207 06:41:47.917] Makefile:203: recipe for target 'test-integration' failed
W1207 06:41:48.017] make: *** [test-integration] Error 1
W1207 06:41:49.003] Traceback (most recent call last):
W1207 06:41:49.003]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 167, in <module>
W1207 06:41:49.003]     main(ARGS.branch, ARGS.script, ARGS.force, ARGS.prow)
W1207 06:41:49.003]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 136, in main
W1207 06:41:49.003]     check(*cmd)
W1207 06:41:49.004]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W1207 06:41:49.004]     subprocess.check_call(cmd)
W1207 06:41:49.004]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
W1207 06:41:49.022]     raise CalledProcessError(retcode, cmd)
W1207 06:41:49.023] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181105-ceed87206', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E1207 06:41:49.028] Command failed
I1207 06:41:49.028] process 685 exited with code 1 after 24.8m
E1207 06:41:49.029] FAIL: pull-kubernetes-integration
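The traceback above is just exit-status plumbing: kubernetes_verify.py runs the dockerized test script with subprocess.check_call, which raises CalledProcessError on any non-zero exit, and the wrapper then reports FAIL for the job. The same pattern in Go, with a stand-in command in place of the docker run:

package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("bash", "-c", "exit 2") // stand-in for the docker run
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Mirrors "returned non-zero exit status 2" in the traceback.
			log.Fatalf("child exited with code %d", exitErr.ExitCode())
		}
		log.Fatalf("failed to start: %v", err)
	}
}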
I1207 06:41:49.029] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1207 06:41:49.474] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1207 06:41:49.517] process 123904 exited with code 0 after 0.0m
I1207 06:41:49.518] Call:  gcloud config get-value account
I1207 06:41:49.790] process 123917 exited with code 0 after 0.0m
I1207 06:41:49.790] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 06:41:49.790] Upload result and artifacts...
I1207 06:41:49.790] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/71684/pull-kubernetes-integration/37834
I1207 06:41:49.791] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/71684/pull-kubernetes-integration/37834/artifacts
W1207 06:41:51.403] CommandException: One or more URLs matched no objects.
E1207 06:41:51.612] Command failed
I1207 06:41:51.612] process 123930 exited with code 1 after 0.0m
W1207 06:41:51.612] Remote dir gs://kubernetes-jenkins/pr-logs/pull/71684/pull-kubernetes-integration/37834/artifacts not exist yet
I1207 06:41:51.612] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/71684/pull-kubernetes-integration/37834/artifacts
I1207 06:41:55.176] process 124075 exited with code 0 after 0.1m
W1207 06:41:55.177] metadata path /workspace/_artifacts/metadata.json does not exist
W1207 06:41:55.177] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...