PR: danielqsj: Fix typos like limitting
Result: FAILURE
Tests: 1 failed / 578 succeeded
Started: 2018-12-07 05:34
Elapsed: 27m33s
Version: v1.14.0-alpha.0.901+5d76949082d149
Builder: gke-prow-default-pool-3c8994a8-vv4p
Refs: master:cd5f41ec, 71684:3c055aa4
pod: ade5ea60-f9e1-11e8-92b6-0a580a6c0310
infra-commit: d6f7bb8bf
repo: k8s.io/kubernetes
repo-commit: 5d76949082d14918dea6d2bae668bb58512a4408
repos: {'k8s.io/kubernetes': 'master:cd5f41ec1ad45c831df38d986a682e5580eaead7,71684:3c055aa4b47232bf7d6b5d5a0901dae239e33c59'}

Test Failures


k8s.io/kubernetes/test/integration/evictions TestTerminalPodEviction 5.49s

go test -v k8s.io/kubernetes/test/integration/evictions -run TestTerminalPodEviction$
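Note: this go test invocation assumes a local etcd is already listening on 127.0.0.1:2379, which is what the connection log below is dialing. In a kubernetes checkout the test is more commonly run through the Makefile wrapper, which handles the etcd setup; per the repo's integration testing docs of this era, something like:

make test-integration WHAT=./test/integration/evictions GOFLAGS="-v" KUBE_TEST_ARGS="-run TestTerminalPodEviction$"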
I1207 05:51:16.044357  116758 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I1207 05:51:16.044389  116758 master.go:272] Node port range unspecified. Defaulting to 30000-32767.
I1207 05:51:16.044400  116758 master.go:228] Using reconciler: 
I1207 05:51:16.046542  116758 clientconn.go:551] parsed scheme: ""
I1207 05:51:16.046591  116758 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 05:51:16.046675  116758 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 05:51:16.046805  116758 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 05:51:16.047968  116758 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 305 lines ...
I1207 05:51:16.169186  116758 clientconn.go:551] parsed scheme: ""
I1207 05:51:16.169236  116758 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 05:51:16.169331  116758 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 05:51:16.169468  116758 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 05:51:16.170068  116758 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:51:16.176304  116758 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W1207 05:51:16.190452  116758 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1207 05:51:16.191031  116758 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1207 05:51:16.193168  116758 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1207 05:51:16.206013  116758 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
E1207 05:51:16.784856  116758 disruption.go:491] Failed to sync pdb concurrent-eviction-requests/test-pdb: Put http://127.0.0.1:45251/apis/policy/v1beta1/namespaces/concurrent-eviction-requests/poddisruptionbudgets/test-pdb/status: dial tcp 127.0.0.1:45251: connect: connection refused
I1207 05:51:17.046771  116758 clientconn.go:551] parsed scheme: ""
I1207 05:51:17.046809  116758 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 05:51:17.046860  116758 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 05:51:17.046954  116758 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 05:51:17.047553  116758 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 05:51:17.212073  116758 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I1207 05:51:17.214755  116758 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I1207 05:51:17.214781  116758 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I1207 05:51:17.222607  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1207 05:51:17.225519  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1207 05:51:17.228336  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1207 05:51:17.231019  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I1207 05:51:17.234075  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I1207 05:51:17.236623  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I1207 05:51:17.239480  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1207 05:51:17.242891  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1207 05:51:17.246418  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1207 05:51:17.249286  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1207 05:51:17.252582  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I1207 05:51:17.255632  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1207 05:51:17.258450  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1207 05:51:17.261365  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1207 05:51:17.264700  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1207 05:51:17.267740  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1207 05:51:17.270910  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1207 05:51:17.274091  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1207 05:51:17.277365  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1207 05:51:17.280130  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1207 05:51:17.291458  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1207 05:51:17.295673  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1207 05:51:17.299266  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I1207 05:51:17.302443  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1207 05:51:17.306104  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1207 05:51:17.314899  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1207 05:51:17.318693  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1207 05:51:17.322313  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1207 05:51:17.325343  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1207 05:51:17.328482  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1207 05:51:17.332295  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1207 05:51:17.341522  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1207 05:51:17.345258  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1207 05:51:17.348993  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1207 05:51:17.352943  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1207 05:51:17.356380  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1207 05:51:17.360484  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1207 05:51:17.363886  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1207 05:51:17.367326  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1207 05:51:17.370678  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1207 05:51:17.374113  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1207 05:51:17.378058  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1207 05:51:17.381823  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1207 05:51:17.385824  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1207 05:51:17.388956  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1207 05:51:17.392103  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1207 05:51:17.396095  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1207 05:51:17.399913  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1207 05:51:17.404559  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1207 05:51:17.407987  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1207 05:51:17.411630  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1207 05:51:17.451180  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1207 05:51:17.491883  116758 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1207 05:51:17.531602  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1207 05:51:17.571209  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
E1207 05:51:17.584893  116758 disruption.go:450] Error syncing PodDisruptionBudget concurrent-eviction-requests/test-pdb, requeuing: Put http://127.0.0.1:45251/apis/policy/v1beta1/namespaces/concurrent-eviction-requests/poddisruptionbudgets/test-pdb/status: dial tcp 127.0.0.1:45251: connect: connection refused
I1207 05:51:17.611134  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1207 05:51:17.651292  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1207 05:51:17.691253  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1207 05:51:17.731212  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1207 05:51:17.771470  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1207 05:51:17.811275  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I1207 05:51:17.851467  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1207 05:51:17.891218  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1207 05:51:17.931370  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1207 05:51:17.971253  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1207 05:51:18.011077  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1207 05:51:18.051116  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1207 05:51:18.091110  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1207 05:51:18.131834  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1207 05:51:18.171711  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1207 05:51:18.211092  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1207 05:51:18.253440  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1207 05:51:18.292053  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1207 05:51:18.331352  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1207 05:51:18.371655  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
E1207 05:51:18.384857  116758 disruption.go:491] Failed to sync pdb concurrent-eviction-requests/test-pdb: Put http://127.0.0.1:45251/apis/policy/v1beta1/namespaces/concurrent-eviction-requests/poddisruptionbudgets/test-pdb/status: dial tcp 127.0.0.1:45251: connect: connection refused
I1207 05:51:18.411062  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1207 05:51:18.463315  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1207 05:51:18.491101  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1207 05:51:18.531706  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1207 05:51:18.571028  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1207 05:51:18.610741  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1207 05:51:18.651382  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1207 05:51:18.691176  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1207 05:51:18.731279  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1207 05:51:18.771644  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1207 05:51:18.811428  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1207 05:51:18.851201  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1207 05:51:18.891264  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1207 05:51:18.931249  116758 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1207 05:51:18.971218  116758 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1207 05:51:19.011446  116758 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1207 05:51:19.051079  116758 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1207 05:51:19.091776  116758 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1207 05:51:19.131337  116758 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1207 05:51:19.171592  116758 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
E1207 05:51:19.184906  116758 disruption.go:450] Error syncing PodDisruptionBudget concurrent-eviction-requests/test-pdb, requeuing: Put http://127.0.0.1:45251/apis/policy/v1beta1/namespaces/concurrent-eviction-requests/poddisruptionbudgets/test-pdb/status: dial tcp 127.0.0.1:45251: connect: connection refused
I1207 05:51:19.211189  116758 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1207 05:51:19.251416  116758 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1207 05:51:19.291942  116758 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1207 05:51:19.331319  116758 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1207 05:51:19.371601  116758 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1207 05:51:19.412281  116758 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1207 05:51:19.451446  116758 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W1207 05:51:19.512817  116758 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
... skipping 5 lines ...
I1207 05:51:19.513937  116758 disruption.go:288] Starting disruption controller
I1207 05:51:19.513962  116758 controller_utils.go:1027] Waiting for caches to sync for disruption controller
I1207 05:51:19.614195  116758 controller_utils.go:1034] Caches are synced for disruption controller
I1207 05:51:19.614240  116758 disruption.go:296] Sending events to api server.
I1207 05:51:21.531545  116758 controller.go:170] Shutting down kubernetes service endpoint reconciler
I1207 05:51:21.531838  116758 disruption.go:305] Shutting down disruption controller
				from junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-054855.xml
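For context, TestTerminalPodEviction exercises the eviction subresource for a pod that is already in a terminal phase. A minimal sketch of the kind of client-go call involved, with illustrative names (the package, function, clientset, namespace, and pod name are assumptions, not taken from the test source):

package evictionexample

import (
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// evictPod POSTs a policy/v1beta1 Eviction to the pod's "eviction"
// subresource; for a pod in a terminal phase (Succeeded/Failed) the request
// is expected to be admitted rather than blocked by a PodDisruptionBudget.
func evictPod(cs kubernetes.Interface, ns, name string) error {
	return cs.PolicyV1beta1().Evictions(ns).Evict(&policyv1beta1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
	})
}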

578 tests passed; 4 tests skipped.

Error lines from build-log.txt

... skipping 10 lines ...
I1207 05:34:45.107] process 213 exited with code 0 after 0.1m
I1207 05:34:45.108] Call:  gcloud config get-value account
I1207 05:34:45.470] process 226 exited with code 0 after 0.0m
I1207 05:34:45.471] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 05:34:45.471] Call:  kubectl get -oyaml pods/ade5ea60-f9e1-11e8-92b6-0a580a6c0310
W1207 05:34:47.417] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E1207 05:34:47.419] Command failed
I1207 05:34:47.420] process 239 exited with code 1 after 0.0m
E1207 05:34:47.420] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/ade5ea60-f9e1-11e8-92b6-0a580a6c0310']' returned non-zero exit status 1
I1207 05:34:47.420] Root: /workspace
I1207 05:34:47.420] cd to /workspace
I1207 05:34:47.420] Checkout: /workspace/k8s.io/kubernetes master:cd5f41ec1ad45c831df38d986a682e5580eaead7,71684:3c055aa4b47232bf7d6b5d5a0901dae239e33c59 to /workspace/k8s.io/kubernetes
I1207 05:34:47.420] Call:  git init k8s.io/kubernetes
... skipping 833 lines ...
W1207 05:43:50.316] I1207 05:43:50.312529   55439 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
W1207 05:43:50.316] I1207 05:43:50.312583   55439 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W1207 05:43:50.316] I1207 05:43:50.312643   55439 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
W1207 05:43:50.316] I1207 05:43:50.312708   55439 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
W1207 05:43:50.317] I1207 05:43:50.312777   55439 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
W1207 05:43:50.317] I1207 05:43:50.312812   55439 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
W1207 05:43:50.317] E1207 05:43:50.312836   55439 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 05:43:50.317] I1207 05:43:50.312853   55439 controllermanager.go:516] Started "resourcequota"
W1207 05:43:50.318] I1207 05:43:50.312986   55439 resource_quota_controller.go:276] Starting resource quota controller
W1207 05:43:50.318] I1207 05:43:50.313016   55439 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W1207 05:43:50.318] I1207 05:43:50.313037   55439 resource_quota_monitor.go:301] QuotaMonitor running
W1207 05:43:50.319] I1207 05:43:50.313776   55439 controllermanager.go:516] Started "horizontalpodautoscaling"
W1207 05:43:50.319] W1207 05:43:50.313818   55439 controllermanager.go:508] Skipping "csrsigning"
... skipping 46 lines ...
W1207 05:43:50.335] I1207 05:43:50.335069   55439 controller_utils.go:1027] Waiting for caches to sync for deployment controller
W1207 05:43:50.335] I1207 05:43:50.335567   55439 controllermanager.go:516] Started "disruption"
W1207 05:43:50.335] I1207 05:43:50.335593   55439 disruption.go:288] Starting disruption controller
W1207 05:43:50.336] I1207 05:43:50.335604   55439 controller_utils.go:1027] Waiting for caches to sync for disruption controller
W1207 05:43:50.336] I1207 05:43:50.335863   55439 controllermanager.go:516] Started "cronjob"
W1207 05:43:50.336] I1207 05:43:50.336003   55439 cronjob_controller.go:92] Starting CronJob Manager
W1207 05:43:50.336] E1207 05:43:50.336274   55439 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1207 05:43:50.336] W1207 05:43:50.336290   55439 controllermanager.go:508] Skipping "service"
W1207 05:43:50.337] W1207 05:43:50.336610   55439 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W1207 05:43:50.337] I1207 05:43:50.337176   55439 controllermanager.go:516] Started "attachdetach"
W1207 05:43:50.337] I1207 05:43:50.337461   55439 attach_detach_controller.go:315] Starting attach detach controller
W1207 05:43:50.337] I1207 05:43:50.337503   55439 controllermanager.go:516] Started "clusterrole-aggregation"
W1207 05:43:50.338] I1207 05:43:50.337477   55439 controller_utils.go:1027] Waiting for caches to sync for attach detach controller
... skipping 2 lines ...
W1207 05:43:50.338] I1207 05:43:50.337986   55439 controllermanager.go:516] Started "csrapproving"
W1207 05:43:50.338] I1207 05:43:50.338183   55439 controllermanager.go:516] Started "csrcleaner"
W1207 05:43:50.338] W1207 05:43:50.338286   55439 controllermanager.go:495] "bootstrapsigner" is disabled
W1207 05:43:50.339] I1207 05:43:50.338339   55439 cleaner.go:81] Starting CSR cleaner controller
W1207 05:43:50.339] I1207 05:43:50.338196   55439 certificate_controller.go:113] Starting certificate controller
W1207 05:43:50.339] I1207 05:43:50.338493   55439 controller_utils.go:1027] Waiting for caches to sync for certificate controller
W1207 05:43:50.339] W1207 05:43:50.338661   55439 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
W1207 05:43:50.339] I1207 05:43:50.339072   55439 controllermanager.go:516] Started "garbagecollector"
W1207 05:43:50.339] W1207 05:43:50.339098   55439 controllermanager.go:495] "tokencleaner" is disabled
W1207 05:43:50.340] I1207 05:43:50.339495   55439 controllermanager.go:516] Started "pvc-protection"
W1207 05:43:50.340] I1207 05:43:50.339869   55439 garbagecollector.go:133] Starting garbage collector controller
W1207 05:43:50.340] I1207 05:43:50.339885   55439 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 05:43:50.340] I1207 05:43:50.340167   55439 pvc_protection_controller.go:99] Starting PVC protection controller
... skipping 15 lines ...
W1207 05:43:50.435] I1207 05:43:50.434570   55439 controller_utils.go:1034] Caches are synced for PV protection controller
W1207 05:43:50.435] I1207 05:43:50.435288   55439 controller_utils.go:1034] Caches are synced for deployment controller
W1207 05:43:50.438] I1207 05:43:50.437808   55439 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
W1207 05:43:50.438] I1207 05:43:50.437816   55439 controller_utils.go:1034] Caches are synced for attach detach controller
W1207 05:43:50.439] I1207 05:43:50.438892   55439 controller_utils.go:1034] Caches are synced for certificate controller
W1207 05:43:50.441] I1207 05:43:50.440709   55439 controller_utils.go:1034] Caches are synced for PVC protection controller
W1207 05:43:50.450] E1207 05:43:50.449995   55439 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W1207 05:43:50.454] E1207 05:43:50.453955   55439 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W1207 05:43:50.463] E1207 05:43:50.463069   55439 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W1207 05:43:50.536] W1207 05:43:50.536130   55439 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1207 05:43:50.617] I1207 05:43:50.616757   55439 controller_utils.go:1034] Caches are synced for taint controller
W1207 05:43:50.617] I1207 05:43:50.616948   55439 taint_manager.go:198] Starting NoExecuteTaintManager
W1207 05:43:50.617] I1207 05:43:50.617068   55439 node_lifecycle_controller.go:1222] Initializing eviction metric for zone: 
W1207 05:43:50.618] I1207 05:43:50.617309   55439 node_lifecycle_controller.go:1072] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W1207 05:43:50.618] I1207 05:43:50.617350   55439 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"12ee939c-f9e3-11e8-a909-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W1207 05:43:50.636] I1207 05:43:50.635867   55439 controller_utils.go:1034] Caches are synced for disruption controller
... skipping 33 lines ...
I1207 05:43:51.607] Successful: --output json has correct server info
I1207 05:43:51.611] +++ [1207 05:43:51] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I1207 05:43:51.758] Successful: --client --output json has correct client info
I1207 05:43:51.766] Successful: --client --output json has no server info
I1207 05:43:51.770] +++ [1207 05:43:51] Testing kubectl version: compare json output using additional --short flag
W1207 05:43:51.871] I1207 05:43:51.798570   55439 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 05:43:51.871] E1207 05:43:51.808335   55439 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 05:43:51.871] I1207 05:43:51.840129   55439 controller_utils.go:1034] Caches are synced for garbage collector controller
W1207 05:43:51.872] I1207 05:43:51.840207   55439 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1207 05:43:51.899] I1207 05:43:51.899029   55439 controller_utils.go:1034] Caches are synced for garbage collector controller
I1207 05:43:52.000] Successful: --short --output client json info is equal to non short result
I1207 05:43:52.001] Successful: --short --output server json info is equal to non short result
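The version checks above compare kubectl's structured output modes against each other. The commands under test are, roughly:

kubectl version --output=json            # client and server version info
kubectl version --client --output=json   # client info only; must omit serverVersion
kubectl version --short --output=json    # abbreviated form; must match the non-short result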
I1207 05:43:52.001] +++ [1207 05:43:51] Testing kubectl version: compare json output with yaml output
... skipping 45 lines ...
I1207 05:43:54.757] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:43:54.760] +++ command: run_RESTMapper_evaluation_tests
I1207 05:43:54.773] +++ [1207 05:43:54] Creating namespace namespace-1544161434-11323
I1207 05:43:54.848] namespace/namespace-1544161434-11323 created
I1207 05:43:54.918] Context "test" modified.
I1207 05:43:54.925] +++ [1207 05:43:54] Testing RESTMapper
I1207 05:43:55.054] +++ [1207 05:43:55] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1207 05:43:55.069] +++ exit code: 0
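The table that follows appears to come from kubectl api-resources, run right after the negative lookup above. Roughly:

kubectl get unknownresourcetype   # expected error: no such resource type
kubectl api-resources             # prints NAME, SHORTNAMES, APIGROUP, NAMESPACED, KIND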
I1207 05:43:55.209] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1207 05:43:55.209] bindings                                                                      true         Binding
I1207 05:43:55.210] componentstatuses                 cs                                          false        ComponentStatus
I1207 05:43:55.210] configmaps                        cm                                          true         ConfigMap
I1207 05:43:55.210] endpoints                         ep                                          true         Endpoints
... skipping 606 lines ...
I1207 05:44:15.583] poddisruptionbudget.policy/test-pdb-3 created
I1207 05:44:15.682] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1207 05:44:15.757] poddisruptionbudget.policy/test-pdb-4 created
I1207 05:44:15.854] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
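The pdb assertions above exercise kubectl create poddisruptionbudget with maxUnavailable values (an absolute count for test-pdb-3, a percentage for test-pdb-4). A sketch with a placeholder selector:

kubectl create poddisruptionbudget test-pdb-4 --namespace=test-kubectl-describe-pod \
  --selector=app=rails --max-unavailable=50%
kubectl get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod \
  -o go-template='{{.spec.maxUnavailable}}'   # the value core.sh:255 asserts on
# As the error a few lines below notes, --min-available and --max-unavailable are mutually exclusive.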
I1207 05:44:16.025] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:44:16.203] pod/env-test-pod created
W1207 05:44:16.304] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1207 05:44:16.304] error: setting 'all' parameter but found a non empty selector. 
W1207 05:44:16.304] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 05:44:16.305] I1207 05:44:15.236939   52110 controller.go:608] quota admission added evaluator for: poddisruptionbudgets.policy
W1207 05:44:16.305] error: min-available and max-unavailable cannot be both specified
I1207 05:44:16.408] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I1207 05:44:16.408] Name:               env-test-pod
I1207 05:44:16.408] Namespace:          test-kubectl-describe-pod
I1207 05:44:16.408] Priority:           0
I1207 05:44:16.408] PriorityClassName:  <none>
I1207 05:44:16.408] Node:               <none>
... skipping 145 lines ...
I1207 05:44:28.673] replicationcontroller "modified" deleted
W1207 05:44:28.773] I1207 05:44:28.301689   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161463-25083", Name:"modified", UID:"2970c50d-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-5q9bp
I1207 05:44:28.932] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:44:29.080] pod/valid-pod created
I1207 05:44:29.179] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 05:44:29.340] Successful
I1207 05:44:29.340] message:Error from server: cannot restore map from string
I1207 05:44:29.340] has:cannot restore map from string
I1207 05:44:29.431] Successful
I1207 05:44:29.432] message:pod/valid-pod patched (no change)
I1207 05:44:29.432] has:patched (no change)
I1207 05:44:29.516] pod/valid-pod patched
I1207 05:44:29.614] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I1207 05:44:30.157] pod/valid-pod patched
I1207 05:44:30.251] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1207 05:44:30.332] pod/valid-pod patched
I1207 05:44:30.432] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1207 05:44:30.620] pod/valid-pod patched
I1207 05:44:30.727] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 05:44:30.914] +++ [1207 05:44:30] "kubectl patch with resourceVersion 490" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
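The conflict above is optimistic concurrency at work: a patch carrying a stale resourceVersion is rejected by the apiserver. A hedged reconstruction (the patched field is illustrative; only resourceVersion 490 is from the log):

kubectl patch pod valid-pod --type=merge \
  -p '{"metadata":{"resourceVersion":"490"},"spec":{"activeDeadlineSeconds":5}}'
# Error from server (Conflict): the object has been modified; retry against the latest version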
W1207 05:44:31.014] E1207 05:44:29.332449   52110 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I1207 05:44:31.163] pod "valid-pod" deleted
I1207 05:44:31.177] pod/valid-pod replaced
I1207 05:44:31.276] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1207 05:44:31.425] Successful
I1207 05:44:31.425] message:error: --grace-period must have --force specified
I1207 05:44:31.425] has:\-\-grace-period must have \-\-force specified
I1207 05:44:31.569] Successful
I1207 05:44:31.569] message:error: --timeout must have --force specified
I1207 05:44:31.570] has:\-\-timeout must have \-\-force specified
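Both failures above come from kubectl delete's flag validation in this build: --grace-period and --timeout are only honored together with --force. Illustratively (the exact flag values the test passes are not shown in the log):

kubectl delete pod valid-pod --grace-period=0          # rejected without --force
kubectl delete pod valid-pod --grace-period=0 --force  # accepted; prints the immediate-deletion warning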
W1207 05:44:31.713] W1207 05:44:31.713301   55439 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1207 05:44:31.814] node/node-v1-test created
I1207 05:44:31.865] node/node-v1-test replaced
I1207 05:44:31.960] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1207 05:44:32.041] node "node-v1-test" deleted
I1207 05:44:32.146] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 05:44:32.437] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 27 lines ...
I1207 05:44:34.766] pod/redis-master created
I1207 05:44:34.771] pod/valid-pod created
W1207 05:44:34.871] Edit cancelled, no changes made.
W1207 05:44:34.871] Edit cancelled, no changes made.
W1207 05:44:34.871] Edit cancelled, no changes made.
W1207 05:44:34.872] Edit cancelled, no changes made.
W1207 05:44:34.872] error: 'name' already has a value (valid-pod), and --overwrite is false
W1207 05:44:34.872] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 05:44:34.972] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I1207 05:44:34.977] core.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I1207 05:44:35.058] pod "redis-master" deleted
I1207 05:44:35.066] pod "valid-pod" deleted
I1207 05:44:35.172] core.sh:622: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 72 lines ...
I1207 05:44:41.282] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1207 05:44:41.284] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:44:41.287] +++ command: run_kubectl_create_error_tests
I1207 05:44:41.302] +++ [1207 05:44:41] Creating namespace namespace-1544161481-1990
I1207 05:44:41.375] namespace/namespace-1544161481-1990 created
I1207 05:44:41.446] Context "test" modified.
I1207 05:44:41.453] +++ [1207 05:44:41] Testing kubectl create with error
W1207 05:44:41.553] Error: required flag(s) "filename" not set
W1207 05:44:41.554] 
W1207 05:44:41.554] 
W1207 05:44:41.554] Examples:
W1207 05:44:41.554]   # Create a pod using the data in pod.json.
W1207 05:44:41.554]   kubectl create -f ./pod.json
W1207 05:44:41.554]   
... skipping 38 lines ...
W1207 05:44:41.558]   kubectl create -f FILENAME [options]
W1207 05:44:41.558] 
W1207 05:44:41.558] Use "kubectl <command> --help" for more information about a given command.
W1207 05:44:41.559] Use "kubectl options" for a list of global command-line options (applies to all commands).
W1207 05:44:41.559] 
W1207 05:44:41.559] required flag(s) "filename" not set
I1207 05:44:41.684] +++ [1207 05:44:41] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
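The create-error tests above hit two distinct failure modes, a missing required flag and schema validation. Roughly:

kubectl create                                                    # required flag(s) "filename" not set
kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml   # ValidationError: unknown object type "nil"
kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml --validate=false   # validation skipped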
W1207 05:44:41.785] kubectl convert is DEPRECATED and will be removed in a future version.
W1207 05:44:41.785] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 05:44:41.885] +++ exit code: 0
I1207 05:44:41.902] Recording: run_kubectl_apply_tests
I1207 05:44:41.903] Running command: run_kubectl_apply_tests
I1207 05:44:41.925] 
... skipping 17 lines ...
I1207 05:44:43.101] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I1207 05:44:43.986] deployment.extensions "test-deployment-retainkeys" deleted
I1207 05:44:44.090] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:44:44.255] pod/selector-test-pod created
I1207 05:44:44.358] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1207 05:44:44.450] Successful
I1207 05:44:44.450] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1207 05:44:44.450] has:pods "selector-test-pod-dont-apply" not found
I1207 05:44:44.534] pod "selector-test-pod" deleted
I1207 05:44:44.636] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:44:44.869] pod/test-pod created (server dry run)
I1207 05:44:44.971] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:44:45.129] pod/test-pod created
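The sequence above first creates the pod with server-side dry-run (validated and admitted, but not persisted, which is why apply.sh:85 still sees no pods) and then for real. A sketch assuming this kubectl generation's --server-dry-run flag and a hypothetical manifest pod.yaml:

kubectl apply -f pod.yaml --server-dry-run   # reports "created (server dry run)"; stores nothing
kubectl apply -f pod.yaml                    # persists pod/test-pod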
... skipping 8 lines ...
W1207 05:44:45.901] I1207 05:44:45.900934   52110 clientconn.go:551] parsed scheme: ""
W1207 05:44:45.902] I1207 05:44:45.900976   52110 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 05:44:45.902] I1207 05:44:45.901026   52110 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 05:44:45.902] I1207 05:44:45.901065   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:44:45.902] I1207 05:44:45.901662   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:44:45.979] I1207 05:44:45.978609   52110 controller.go:608] quota admission added evaluator for: resources.mygroup.example.com
W1207 05:44:46.072] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1207 05:44:46.172] kind.mygroup.example.com/myobj created (server dry run)
I1207 05:44:46.172] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1207 05:44:46.270] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:44:46.436] pod/a created
I1207 05:44:47.942] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I1207 05:44:48.032] Successful
I1207 05:44:48.032] message:Error from server (NotFound): pods "b" not found
I1207 05:44:48.032] has:pods "b" not found
I1207 05:44:48.202] pod/b created
I1207 05:44:48.219] pod/a pruned
I1207 05:44:49.910] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I1207 05:44:50.001] Successful
I1207 05:44:50.002] message:Error from server (NotFound): pods "a" not found
I1207 05:44:50.002] has:pods "a" not found
I1207 05:44:50.087] pod "b" deleted
I1207 05:44:50.185] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:44:50.337] pod/a created
I1207 05:44:50.436] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I1207 05:44:50.523] Successful
I1207 05:44:50.523] message:Error from server (NotFound): pods "b" not found
I1207 05:44:50.523] has:pods "b" not found
I1207 05:44:50.677] pod/b created
I1207 05:44:50.776] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I1207 05:44:50.867] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I1207 05:44:50.950] pod "a" deleted
I1207 05:44:50.957] pod "b" deleted
I1207 05:44:51.118] Successful
I1207 05:44:51.119] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I1207 05:44:51.119] has:all resources selected for prune without explicitly passing --all
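The prune steps above follow kubectl apply's ownership model: objects previously applied under a selector but absent from the new manifests are deleted, and pruning without a selector demands an explicit --all. A sketch with hypothetical manifest paths:

kubectl apply --prune -l name=test -f hack/testdata/prune/b.yaml   # creates b, prunes a
kubectl apply --prune --all -f hack/testdata/prune/b.yaml          # prune with no selector needs --all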
I1207 05:44:51.276] pod/a created
I1207 05:44:51.285] pod/b created
I1207 05:44:51.296] service/prune-svc created
I1207 05:44:52.798] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I1207 05:44:52.891] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 133 lines ...
I1207 05:45:06.170] Context "test" modified.
I1207 05:45:06.177] +++ [1207 05:45:06] Testing kubectl create filter
I1207 05:45:06.278] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:45:06.442] pod/selector-test-pod created
I1207 05:45:06.544] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1207 05:45:06.636] Successful
I1207 05:45:06.637] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1207 05:45:06.637] has:pods "selector-test-pod-dont-apply" not found
I1207 05:45:06.723] pod "selector-test-pod" deleted
I1207 05:45:06.745] +++ exit code: 0
I1207 05:45:06.784] Recording: run_kubectl_apply_deployments_tests
I1207 05:45:06.784] Running command: run_kubectl_apply_deployments_tests
I1207 05:45:06.807] 
... skipping 38 lines ...
I1207 05:45:08.862] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:45:08.957] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:45:09.054] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:45:09.222] deployment.extensions/nginx created
I1207 05:45:09.328] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I1207 05:45:13.539] Successful
I1207 05:45:13.539] message:Error from server (Conflict): error when applying patch:
I1207 05:45:13.539] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544161506-21599\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1207 05:45:13.540] to:
I1207 05:45:13.540] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I1207 05:45:13.540] Name: "nginx", Namespace: "namespace-1544161506-21599"
I1207 05:45:13.541] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1544161506-21599/deployments/nginx" "uid":"41d55c5b-f9e3-11e8-a909-0242ac110002" "creationTimestamp":"2018-12-07T05:45:09Z" "name":"nginx" "resourceVersion":"709" "generation":'\x01' "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544161506-21599\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "namespace":"namespace-1544161506-21599"] "spec":map["replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["labels":map["name":"nginx1"] "creationTimestamp":<nil>] "spec":map["dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e']] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647)] "status":map["updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["lastTransitionTime":"2018-12-07T05:45:09Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available" "status":"False" "lastUpdateTime":"2018-12-07T05:45:09Z"]] "observedGeneration":'\x01' "replicas":'\x03']]}
I1207 05:45:13.541] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I1207 05:45:13.541] has:Error from server (Conflict)
W1207 05:45:13.642] I1207 05:45:09.226604   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161506-21599", Name:"nginx", UID:"41d55c5b-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W1207 05:45:13.642] I1207 05:45:09.230337   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161506-21599", Name:"nginx-5d56d6b95f", UID:"41d603ce-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-4gzr8
W1207 05:45:13.643] I1207 05:45:09.233414   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161506-21599", Name:"nginx-5d56d6b95f", UID:"41d603ce-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-68pc8
W1207 05:45:13.643] I1207 05:45:09.233615   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161506-21599", Name:"nginx-5d56d6b95f", UID:"41d603ce-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-7c77p
I1207 05:45:18.744] deployment.extensions/nginx configured
W1207 05:45:18.844] I1207 05:45:18.747475   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161506-21599", Name:"nginx", UID:"47823e67-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
... skipping 86 lines ...
I1207 05:45:25.336] +++ [1207 05:45:25] Creating namespace namespace-1544161525-23008
I1207 05:45:25.407] namespace/namespace-1544161525-23008 created
I1207 05:45:25.477] Context "test" modified.
I1207 05:45:25.484] +++ [1207 05:45:25] Testing kubectl get
I1207 05:45:25.577] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:45:25.668] Successful
I1207 05:45:25.669] message:Error from server (NotFound): pods "abc" not found
I1207 05:45:25.669] has:pods "abc" not found
I1207 05:45:25.761] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:45:25.855] Successful
I1207 05:45:25.856] message:Error from server (NotFound): pods "abc" not found
I1207 05:45:25.856] has:pods "abc" not found
I1207 05:45:25.950] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:45:26.037] Successful
I1207 05:45:26.037] message:{
I1207 05:45:26.037]     "apiVersion": "v1",
I1207 05:45:26.037]     "items": [],
... skipping 23 lines ...
I1207 05:45:26.387] has not:No resources found
I1207 05:45:26.475] Successful
I1207 05:45:26.476] message:NAME
I1207 05:45:26.476] has not:No resources found
I1207 05:45:26.572] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:45:26.689] Successful
I1207 05:45:26.689] message:error: the server doesn't have a resource type "foobar"
I1207 05:45:26.690] has not:No resources found
I1207 05:45:26.772] Successful
I1207 05:45:26.773] message:No resources found.
I1207 05:45:26.773] has:No resources found
I1207 05:45:26.857] Successful
I1207 05:45:26.858] message:
I1207 05:45:26.858] has not:No resources found
I1207 05:45:26.941] Successful
I1207 05:45:26.941] message:No resources found.
I1207 05:45:26.941] has:No resources found
I1207 05:45:27.034] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:45:27.122] Successful
I1207 05:45:27.122] message:Error from server (NotFound): pods "abc" not found
I1207 05:45:27.122] has:pods "abc" not found
I1207 05:45:27.124] FAIL!
I1207 05:45:27.124] message:Error from server (NotFound): pods "abc" not found
I1207 05:45:27.124] has not:List
I1207 05:45:27.124] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1207 05:45:27.254] Successful
I1207 05:45:27.254] message:I1207 05:45:27.191068   67493 loader.go:359] Config loaded from file /tmp/tmp.S1qPyPetDf/.kube/config
I1207 05:45:27.255] I1207 05:45:27.191613   67493 loader.go:359] Config loaded from file /tmp/tmp.S1qPyPetDf/.kube/config
I1207 05:45:27.255] I1207 05:45:27.193068   67493 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 995 lines ...
I1207 05:45:30.804] }
I1207 05:45:30.899] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 05:45:31.154] <no value>Successful
I1207 05:45:31.154] message:valid-pod:
I1207 05:45:31.154] has:valid-pod:
I1207 05:45:31.241] Successful
I1207 05:45:31.242] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1207 05:45:31.242] 	template was:
I1207 05:45:31.242] 		{.missing}
I1207 05:45:31.242] 	object given to jsonpath engine was:
I1207 05:45:31.243] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"valid-pod", "namespace":"namespace-1544161530-4766", "selfLink":"/api/v1/namespaces/namespace-1544161530-4766/pods/valid-pod", "uid":"4ea4e511-f9e3-11e8-a909-0242ac110002", "resourceVersion":"800", "creationTimestamp":"2018-12-07T05:45:30Z", "labels":map[string]interface {}{"name":"valid-pod"}}, "spec":map[string]interface {}{"restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"memory":"512Mi", "cpu":"1"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always"}}}, "status":map[string]interface {}{"qosClass":"Guaranteed", "phase":"Pending"}}
I1207 05:45:31.243] has:missing is not found
I1207 05:45:31.329] Successful
I1207 05:45:31.330] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1207 05:45:31.330] 	template was:
I1207 05:45:31.330] 		{{.missing}}
I1207 05:45:31.330] 	raw data was:
I1207 05:45:31.330] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2018-12-07T05:45:30Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1544161530-4766","resourceVersion":"800","selfLink":"/api/v1/namespaces/namespace-1544161530-4766/pods/valid-pod","uid":"4ea4e511-f9e3-11e8-a909-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1207 05:45:31.331] 	object given to template engine was:
I1207 05:45:31.331] 		map[apiVersion:v1 kind:Pod metadata:map[labels:map[name:valid-pod] name:valid-pod namespace:namespace-1544161530-4766 resourceVersion:800 selfLink:/api/v1/namespaces/namespace-1544161530-4766/pods/valid-pod uid:4ea4e511-f9e3-11e8-a909-0242ac110002 creationTimestamp:2018-12-07T05:45:30Z] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I1207 05:45:31.331] has:map has no entry for key "missing"
W1207 05:45:31.432] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
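The two template failures above come from different engines: -o jsonpath uses kubectl's JSONPath implementation, while -o go-template delegates to Go's text/template, so the same missing key produces two differently worded errors:

kubectl get pod valid-pod -o jsonpath='{.missing}'        # "missing is not found"
kubectl get pod valid-pod -o go-template='{{.missing}}'   # map has no entry for key "missing"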
W1207 05:45:32.417] E1207 05:45:32.417209   67878 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I1207 05:45:32.518] Successful
I1207 05:45:32.519] message:NAME        READY   STATUS    RESTARTS   AGE
I1207 05:45:32.519] valid-pod   0/1     Pending   0          1s
I1207 05:45:32.519] has:STATUS
I1207 05:45:32.519] Successful
... skipping 80 lines ...
I1207 05:45:34.709]   terminationGracePeriodSeconds: 30
I1207 05:45:34.709] status:
I1207 05:45:34.709]   phase: Pending
I1207 05:45:34.709]   qosClass: Guaranteed
I1207 05:45:34.709] has:name: valid-pod
I1207 05:45:34.709] Successful
I1207 05:45:34.709] message:Error from server (NotFound): pods "invalid-pod" not found
I1207 05:45:34.709] has:"invalid-pod" not found
I1207 05:45:34.788] pod "valid-pod" deleted
I1207 05:45:34.888] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:45:35.047] pod/redis-master created
I1207 05:45:35.052] pod/valid-pod created
I1207 05:45:35.149] Successful
... skipping 312 lines ...
I1207 05:45:39.557] Running command: run_create_secret_tests
I1207 05:45:39.578] 
I1207 05:45:39.581] +++ Running case: test-cmd.run_create_secret_tests 
I1207 05:45:39.583] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:45:39.585] +++ command: run_create_secret_tests
I1207 05:45:39.682] Successful
I1207 05:45:39.682] message:Error from server (NotFound): secrets "mysecret" not found
I1207 05:45:39.682] has:secrets "mysecret" not found
I1207 05:45:39.848] Successful
I1207 05:45:39.849] message:Error from server (NotFound): secrets "mysecret" not found
I1207 05:45:39.849] has:secrets "mysecret" not found
I1207 05:45:39.850] Successful
I1207 05:45:39.851] message:user-specified
I1207 05:45:39.851] has:user-specified
I1207 05:45:39.927] Successful
I1207 05:45:40.003] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"542dd94d-f9e3-11e8-a909-0242ac110002","resourceVersion":"874","creationTimestamp":"2018-12-07T05:45:39Z"}}
... skipping 80 lines ...
I1207 05:45:41.972] has:Timeout exceeded while reading body
I1207 05:45:42.056] Successful
I1207 05:45:42.056] message:NAME        READY   STATUS    RESTARTS   AGE
I1207 05:45:42.056] valid-pod   0/1     Pending   0          2s
I1207 05:45:42.056] has:valid-pod
I1207 05:45:42.127] Successful
I1207 05:45:42.128] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1207 05:45:42.128] has:Invalid timeout value
I1207 05:45:42.215] pod "valid-pod" deleted
I1207 05:45:42.238] +++ exit code: 0
I1207 05:45:42.278] Recording: run_crd_tests
I1207 05:45:42.279] Running command: run_crd_tests
I1207 05:45:42.300] 
... skipping 166 lines ...
I1207 05:45:46.768] foo.company.com/test patched
I1207 05:45:46.867] crd.sh:237: Successful get foos/test {{.patched}}: value1
I1207 05:45:46.953] foo.company.com/test patched
I1207 05:45:47.050] crd.sh:239: Successful get foos/test {{.patched}}: value2
I1207 05:45:47.135] foo.company.com/test patched
I1207 05:45:47.232] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
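Custom resources carry no strategic-merge schema, so the patches above have to use JSON merge patch, as the recorded change-cause just below also shows:

kubectl patch foos/test --type=merge -p '{"patched":"value2"}'
kubectl patch foos/test --type=merge -p '{"patched":null}'   # clearing the field yields <no value>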
I1207 05:45:47.396] +++ [1207 05:45:47] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I1207 05:45:47.464] {
I1207 05:45:47.464]     "apiVersion": "company.com/v1",
I1207 05:45:47.464]     "kind": "Foo",
I1207 05:45:47.464]     "metadata": {
I1207 05:45:47.464]         "annotations": {
I1207 05:45:47.465]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 114 lines ...
I1207 05:45:50.047] has:bar.company.com/test
I1207 05:45:50.129] bar.company.com "test" deleted
W1207 05:45:50.229] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 70359 Killed                  while [ ${tries} -lt 10 ]; do
W1207 05:45:50.230]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W1207 05:45:50.230] done
W1207 05:45:50.230] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 70358 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W1207 05:45:51.935] E1207 05:45:51.934349   55439 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources"]
W1207 05:45:52.125] I1207 05:45:52.124887   55439 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 05:45:52.126] I1207 05:45:52.126435   52110 clientconn.go:551] parsed scheme: ""
W1207 05:45:52.127] I1207 05:45:52.126464   52110 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 05:45:52.127] I1207 05:45:52.126517   52110 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 05:45:52.127] I1207 05:45:52.126567   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:45:52.127] I1207 05:45:52.126981   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 63 lines ...
I1207 05:45:58.532] bar.company.com/test created
I1207 05:45:58.636] crd.sh:456: Successful get bars {{len .items}}: 1
I1207 05:45:58.716] namespace "non-native-resources" deleted
I1207 05:46:03.997] crd.sh:459: Successful get bars {{len .items}}: 0
I1207 05:46:04.164] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I1207 05:46:04.263] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
W1207 05:46:04.364] Error from server (NotFound): namespaces "non-native-resources" not found
I1207 05:46:04.464] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1207 05:46:04.475] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I1207 05:46:04.508] +++ exit code: 0
I1207 05:46:04.546] Recording: run_cmd_with_img_tests
I1207 05:46:04.546] Running command: run_cmd_with_img_tests
I1207 05:46:04.567] 
... skipping 6 lines ...
I1207 05:46:04.736] +++ [1207 05:46:04] Testing cmd with image
I1207 05:46:04.830] Successful
I1207 05:46:04.830] message:deployment.apps/test1 created
I1207 05:46:04.830] has:deployment.apps/test1 created
I1207 05:46:04.910] deployment.extensions "test1" deleted
I1207 05:46:04.993] Successful
I1207 05:46:04.993] message:error: Invalid image name "InvalidImageName": invalid reference format
I1207 05:46:04.993] has:error: Invalid image name "InvalidImageName": invalid reference format
I1207 05:46:05.009] +++ exit code: 0
I1207 05:46:05.077] Recording: run_recursive_resources_tests
I1207 05:46:05.078] Running command: run_recursive_resources_tests
I1207 05:46:05.100] 
I1207 05:46:05.103] +++ Running case: test-cmd.run_recursive_resources_tests 
I1207 05:46:05.106] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I1207 05:46:05.270] Context "test" modified.
I1207 05:46:05.373] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:46:05.651] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:05.653] Successful
I1207 05:46:05.653] message:pod/busybox0 created
I1207 05:46:05.653] pod/busybox1 created
I1207 05:46:05.654] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 05:46:05.654] has:error validating data: kind not set
I1207 05:46:05.748] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:05.932] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1207 05:46:05.934] Successful
I1207 05:46:05.935] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:46:05.935] has:Object 'Kind' is missing
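The recursive tests above walk a directory that deliberately contains one broken manifest. Each decodable file is processed and the broken one is reported without aborting the rest, e.g.:

kubectl create -f hack/testdata/recursive/pod --recursive   # creates busybox0 and busybox1,
                                                            # reports the file missing its Kind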
I1207 05:46:06.032] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:06.307] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1207 05:46:06.310] Successful
I1207 05:46:06.310] message:pod/busybox0 replaced
I1207 05:46:06.310] pod/busybox1 replaced
I1207 05:46:06.310] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 05:46:06.311] has:error validating data: kind not set
I1207 05:46:06.406] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:06.508] Successful
I1207 05:46:06.509] message:Name:               busybox0
I1207 05:46:06.509] Namespace:          namespace-1544161565-4902
I1207 05:46:06.509] Priority:           0
I1207 05:46:06.509] PriorityClassName:  <none>
... skipping 159 lines ...
I1207 05:46:06.526] has:Object 'Kind' is missing
I1207 05:46:06.610] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:06.807] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1207 05:46:06.810] Successful
I1207 05:46:06.810] message:pod/busybox0 annotated
I1207 05:46:06.810] pod/busybox1 annotated
I1207 05:46:06.811] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:46:06.811] has:Object 'Kind' is missing
I1207 05:46:06.908] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:07.189] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1207 05:46:07.192] Successful
I1207 05:46:07.192] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 05:46:07.192] pod/busybox0 configured
I1207 05:46:07.192] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 05:46:07.193] pod/busybox1 configured
I1207 05:46:07.193] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 05:46:07.193] has:error validating data: kind not set
I1207 05:46:07.286] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:46:07.441] deployment.extensions/nginx created
I1207 05:46:07.545] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1207 05:46:07.647] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:46:07.823] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I1207 05:46:07.825] Successful
... skipping 51 lines ...
W1207 05:46:08.012] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 05:46:08.113] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:08.195] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:08.198] Successful
I1207 05:46:08.198] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1207 05:46:08.199] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 05:46:08.199] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:46:08.199] has:Object 'Kind' is missing
I1207 05:46:08.298] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:08.391] Successful
I1207 05:46:08.391] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:46:08.392] has:busybox0:busybox1:
I1207 05:46:08.393] Successful
I1207 05:46:08.393] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:46:08.393] has:Object 'Kind' is missing
I1207 05:46:08.493] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:08.593] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:46:08.689] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1207 05:46:08.692] Successful
I1207 05:46:08.692] message:pod/busybox0 labeled
I1207 05:46:08.692] pod/busybox1 labeled
I1207 05:46:08.692] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:46:08.693] has:Object 'Kind' is missing
I1207 05:46:08.789] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:08.904] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W1207 05:46:09.005] I1207 05:46:08.892500   55439 namespace_controller.go:171] Namespace has been deleted non-native-resources
I1207 05:46:09.106] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1207 05:46:09.107] Successful
I1207 05:46:09.107] message:pod/busybox0 patched
I1207 05:46:09.107] pod/busybox1 patched
I1207 05:46:09.107] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:46:09.107] has:Object 'Kind' is missing
I1207 05:46:09.190] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:09.448] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:46:09.451] Successful
I1207 05:46:09.452] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 05:46:09.452] pod "busybox0" force deleted
I1207 05:46:09.452] pod "busybox1" force deleted
I1207 05:46:09.453] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:46:09.453] has:Object 'Kind' is missing
I1207 05:46:09.582] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:46:09.798] replicationcontroller/busybox0 created
I1207 05:46:09.805] replicationcontroller/busybox1 created
W1207 05:46:09.906] I1207 05:46:09.803275   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161565-4902", Name:"busybox0", UID:"65f05aca-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1039", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-v4zfc
W1207 05:46:09.907] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 05:46:09.907] I1207 05:46:09.810210   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161565-4902", Name:"busybox1", UID:"65f1860c-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1041", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-xr46g
I1207 05:46:10.007] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:10.082] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:10.210] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 05:46:10.330] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 05:46:10.595] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1207 05:46:10.722] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1207 05:46:10.725] Successful
I1207 05:46:10.725] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1207 05:46:10.726] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1207 05:46:10.726] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:10.726] has:Object 'Kind' is missing
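The HPA assertions above come from autoscaling the recursive rc directory; roughly, with flag spelling as of this kubectl generation:

kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80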
I1207 05:46:10.840] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1207 05:46:10.967] horizontalpodautoscaler.autoscaling "busybox1" deleted
I1207 05:46:11.109] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:11.240] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 05:46:11.367] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 05:46:11.628] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1207 05:46:11.754] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1207 05:46:11.757] Successful
I1207 05:46:11.757] message:service/busybox0 exposed
I1207 05:46:11.757] service/busybox1 exposed
I1207 05:46:11.758] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:11.758] has:Object 'Kind' is missing
I1207 05:46:11.894] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:12.024] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 05:46:12.157] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 05:46:12.450] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I1207 05:46:12.579] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I1207 05:46:12.583] Successful
I1207 05:46:12.583] message:replicationcontroller/busybox0 scaled
I1207 05:46:12.583] replicationcontroller/busybox1 scaled
I1207 05:46:12.584] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:12.584] has:Object 'Kind' is missing
W1207 05:46:12.685] I1207 05:46:12.300492   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161565-4902", Name:"busybox0", UID:"65f05aca-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1060", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-jzs6f
W1207 05:46:12.686] I1207 05:46:12.314110   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161565-4902", Name:"busybox1", UID:"65f1860c-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1065", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-bxwht
I1207 05:46:12.786] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:12.971] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:46:12.975] Successful
I1207 05:46:12.975] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 05:46:12.976] replicationcontroller "busybox0" force deleted
I1207 05:46:12.976] replicationcontroller "busybox1" force deleted
I1207 05:46:12.976] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:12.977] has:Object 'Kind' is missing
I1207 05:46:13.105] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:46:13.319] deployment.extensions/nginx1-deployment created
I1207 05:46:13.325] deployment.extensions/nginx0-deployment created
W1207 05:46:13.425] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 05:46:13.426] I1207 05:46:13.324555   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161565-4902", Name:"nginx1-deployment", UID:"68098a0c-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1081", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W1207 05:46:13.427] I1207 05:46:13.330002   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161565-4902", Name:"nginx1-deployment-75f6fc6747", UID:"680a623c-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1082", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-r8d6g
W1207 05:46:13.427] I1207 05:46:13.330788   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161565-4902", Name:"nginx0-deployment", UID:"680a8a5a-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1083", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W1207 05:46:13.428] I1207 05:46:13.332615   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161565-4902", Name:"nginx1-deployment-75f6fc6747", UID:"680a623c-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1082", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-wxk78
W1207 05:46:13.428] I1207 05:46:13.339262   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161565-4902", Name:"nginx0-deployment-b6bb4ccbb", UID:"680b793c-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-pscwx
W1207 05:46:13.428] I1207 05:46:13.345030   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161565-4902", Name:"nginx0-deployment-b6bb4ccbb", UID:"680b793c-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-4n6xp
I1207 05:46:13.529] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1207 05:46:13.602] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1207 05:46:13.896] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1207 05:46:13.899] Successful
I1207 05:46:13.900] message:deployment.extensions/nginx1-deployment skipped rollback (current template already matches revision 1)
I1207 05:46:13.900] deployment.extensions/nginx0-deployment skipped rollback (current template already matches revision 1)
I1207 05:46:13.900] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 05:46:13.901] has:Object 'Kind' is missing
I1207 05:46:14.033] deployment.extensions/nginx1-deployment paused
I1207 05:46:14.039] deployment.extensions/nginx0-deployment paused
I1207 05:46:14.185] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1207 05:46:14.188] Successful
I1207 05:46:14.189] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I1207 05:46:14.528] 1         <none>
I1207 05:46:14.528] 
I1207 05:46:14.528] deployment.extensions/nginx0-deployment 
I1207 05:46:14.528] REVISION  CHANGE-CAUSE
I1207 05:46:14.528] 1         <none>
I1207 05:46:14.528] 
I1207 05:46:14.529] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 05:46:14.529] has:nginx0-deployment
I1207 05:46:14.530] Successful
I1207 05:46:14.530] message:deployment.extensions/nginx1-deployment 
I1207 05:46:14.530] REVISION  CHANGE-CAUSE
I1207 05:46:14.530] 1         <none>
I1207 05:46:14.530] 
I1207 05:46:14.530] deployment.extensions/nginx0-deployment 
I1207 05:46:14.530] REVISION  CHANGE-CAUSE
I1207 05:46:14.531] 1         <none>
I1207 05:46:14.531] 
I1207 05:46:14.531] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 05:46:14.531] has:nginx1-deployment
I1207 05:46:14.532] Successful
I1207 05:46:14.532] message:deployment.extensions/nginx1-deployment 
I1207 05:46:14.532] REVISION  CHANGE-CAUSE
I1207 05:46:14.533] 1         <none>
I1207 05:46:14.533] 
I1207 05:46:14.533] deployment.extensions/nginx0-deployment 
I1207 05:46:14.533] REVISION  CHANGE-CAUSE
I1207 05:46:14.533] 1         <none>
I1207 05:46:14.533] 
I1207 05:46:14.533] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 05:46:14.533] has:Object 'Kind' is missing
I1207 05:46:14.615] deployment.extensions "nginx1-deployment" force deleted
I1207 05:46:14.623] deployment.extensions "nginx0-deployment" force deleted
W1207 05:46:14.723] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 05:46:14.724] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 05:46:15.726] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:46:15.876] replicationcontroller/busybox0 created
I1207 05:46:15.881] replicationcontroller/busybox1 created
I1207 05:46:15.987] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:46:16.089] Successful
I1207 05:46:16.089] message:no rollbacker has been implemented for "ReplicationController"
... skipping 3 lines ...
I1207 05:46:16.091] Successful
I1207 05:46:16.091] message:no rollbacker has been implemented for "ReplicationController"
I1207 05:46:16.091] no rollbacker has been implemented for "ReplicationController"
I1207 05:46:16.091] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:16.092] has:Object 'Kind' is missing
W1207 05:46:16.192] I1207 05:46:15.880040   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161565-4902", Name:"busybox0", UID:"698fd610-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1126", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-bn5vb
W1207 05:46:16.193] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 05:46:16.193] I1207 05:46:15.884007   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161565-4902", Name:"busybox1", UID:"6990bec5-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1128", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-dfg97
I1207 05:46:16.293] Successful
I1207 05:46:16.294] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:16.294] error: replicationcontrollers "busybox0" pausing is not supported
I1207 05:46:16.294] error: replicationcontrollers "busybox1" pausing is not supported
I1207 05:46:16.295] has:Object 'Kind' is missing
I1207 05:46:16.295] Successful
I1207 05:46:16.295] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:16.295] error: replicationcontrollers "busybox0" pausing is not supported
I1207 05:46:16.295] error: replicationcontrollers "busybox1" pausing is not supported
I1207 05:46:16.295] has:replicationcontrollers "busybox0" pausing is not supported
I1207 05:46:16.296] Successful
I1207 05:46:16.296] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:16.296] error: replicationcontrollers "busybox0" pausing is not supported
I1207 05:46:16.296] error: replicationcontrollers "busybox1" pausing is not supported
I1207 05:46:16.296] has:replicationcontrollers "busybox1" pausing is not supported
I1207 05:46:16.296] Successful
I1207 05:46:16.297] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:16.297] error: replicationcontrollers "busybox0" resuming is not supported
I1207 05:46:16.297] error: replicationcontrollers "busybox1" resuming is not supported
I1207 05:46:16.297] has:Object 'Kind' is missing
I1207 05:46:16.299] Successful
I1207 05:46:16.300] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:16.300] error: replicationcontrollers "busybox0" resuming is not supported
I1207 05:46:16.300] error: replicationcontrollers "busybox1" resuming is not supported
I1207 05:46:16.300] has:replicationcontrollers "busybox0" resuming is not supported
I1207 05:46:16.302] Successful
I1207 05:46:16.302] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:16.302] error: replicationcontrollers "busybox0" resuming is not supported
I1207 05:46:16.303] error: replicationcontrollers "busybox1" resuming is not supported
I1207 05:46:16.303] has:replicationcontrollers "busybox0" resuming is not supported
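[Editor's note] The blocks above all probe kubectl rollout subcommands against ReplicationControllers, which implement no rollbacker or pauser, so each command fails per object while the broken fixture still fails to decode. A rough sketch of the commands being exercised; the invocations are inferred from the error strings, not shown verbatim in this excerpt:

    kubectl rollout undo rc/busybox0     # -> no rollbacker has been implemented for "ReplicationController"
    kubectl rollout pause rc/busybox0    # -> error: replicationcontrollers "busybox0" pausing is not supported
    kubectl rollout resume rc/busybox0   # -> error: replicationcontrollers "busybox0" resuming is not supported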
I1207 05:46:16.385] replicationcontroller "busybox0" force deleted
I1207 05:46:16.391] replicationcontroller "busybox1" force deleted
W1207 05:46:16.492] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 05:46:16.492] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:46:17.415] +++ exit code: 0
I1207 05:46:17.471] Recording: run_namespace_tests
I1207 05:46:17.471] Running command: run_namespace_tests
I1207 05:46:17.491] 
I1207 05:46:17.494] +++ Running case: test-cmd.run_namespace_tests 
I1207 05:46:17.496] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:46:17.498] +++ command: run_namespace_tests
I1207 05:46:17.508] +++ [1207 05:46:17] Testing kubectl(v1:namespaces)
I1207 05:46:17.581] namespace/my-namespace created
I1207 05:46:17.678] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1207 05:46:17.762] namespace "my-namespace" deleted
W1207 05:46:21.945] E1207 05:46:21.944064   55439 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 05:46:22.279] I1207 05:46:22.278099   55439 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 05:46:22.380] I1207 05:46:22.378857   55439 controller_utils.go:1034] Caches are synced for garbage collector controller
I1207 05:46:23.037] namespace/my-namespace condition met
I1207 05:46:23.188] Successful
I1207 05:46:23.189] message:Error from server (NotFound): namespaces "my-namespace" not found
I1207 05:46:23.189] has: not found
I1207 05:46:23.413] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I1207 05:46:23.536] namespace/other created
I1207 05:46:23.700] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I1207 05:46:23.866] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:46:24.169] pod/valid-pod created
I1207 05:46:24.341] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 05:46:24.521] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 05:46:24.668] Successful
I1207 05:46:24.669] message:error: a resource cannot be retrieved by name across all namespaces
I1207 05:46:24.669] has:a resource cannot be retrieved by name across all namespaces
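[Editor's note] This assertion covers kubectl's rule that a namespaced resource can only be fetched by name within a single namespace. A minimal sketch, with the pod and namespace names taken from the surrounding log:

    kubectl get pods valid-pod --all-namespaces
    # error: a resource cannot be retrieved by name across all namespaces
    kubectl get pods valid-pod --namespace=other   # scoping to one namespace works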
I1207 05:46:24.841] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 05:46:24.994] pod "valid-pod" force deleted
W1207 05:46:25.095] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 05:46:25.196] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:46:25.318] namespace "other" deleted
... skipping 117 lines ...
I1207 05:46:46.221] +++ command: run_client_config_tests
I1207 05:46:46.233] +++ [1207 05:46:46] Creating namespace namespace-1544161606-28728
I1207 05:46:46.309] namespace/namespace-1544161606-28728 created
I1207 05:46:46.380] Context "test" modified.
I1207 05:46:46.387] +++ [1207 05:46:46] Testing client config
I1207 05:46:46.459] Successful
I1207 05:46:46.459] message:error: stat missing: no such file or directory
I1207 05:46:46.459] has:missing: no such file or directory
I1207 05:46:46.528] Successful
I1207 05:46:46.528] message:error: stat missing: no such file or directory
I1207 05:46:46.528] has:missing: no such file or directory
I1207 05:46:46.599] Successful
I1207 05:46:46.599] message:error: stat missing: no such file or directory
I1207 05:46:46.599] has:missing: no such file or directory
I1207 05:46:46.674] Successful
I1207 05:46:46.674] message:Error in configuration: context was not found for specified context: missing-context
I1207 05:46:46.674] has:context was not found for specified context: missing-context
I1207 05:46:46.748] Successful
I1207 05:46:46.748] message:error: no server found for cluster "missing-cluster"
I1207 05:46:46.748] has:no server found for cluster "missing-cluster"
I1207 05:46:46.824] Successful
I1207 05:46:46.824] message:error: auth info "missing-user" does not exist
I1207 05:46:46.824] has:auth info "missing-user" does not exist
I1207 05:46:46.965] Successful
I1207 05:46:46.965] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1207 05:46:46.965] has:Error loading config file
I1207 05:46:47.039] Successful
I1207 05:46:47.039] message:error: stat missing-config: no such file or directory
I1207 05:46:47.039] has:no such file or directory
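[Editor's note] The client-config cases above each point kubectl at a nonexistent kubeconfig, context, cluster, or user and assert on the resulting error. A sketch of plausible invocations; the test's exact commands are not shown in this excerpt, so these are assumptions based on the error strings:

    kubectl get pods --kubeconfig=missing        # -> error: stat missing: no such file or directory
    kubectl get pods --context=missing-context   # -> context was not found for specified context: missing-context
    kubectl get pods --cluster=missing-cluster   # -> error: no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user         # -> error: auth info "missing-user" does not exist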
I1207 05:46:47.055] +++ exit code: 0
I1207 05:46:47.093] Recording: run_service_accounts_tests
I1207 05:46:47.094] Running command: run_service_accounts_tests
I1207 05:46:47.116] 
I1207 05:46:47.118] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 76 lines ...
I1207 05:46:54.574]                 job-name=test-job
I1207 05:46:54.574]                 run=pi
I1207 05:46:54.575] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1207 05:46:54.575] Parallelism:    1
I1207 05:46:54.575] Completions:    1
I1207 05:46:54.575] Start Time:     Fri, 07 Dec 2018 05:46:54 +0000
I1207 05:46:54.575] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1207 05:46:54.575] Pod Template:
I1207 05:46:54.575]   Labels:  controller-uid=80759686-f9e3-11e8-a909-0242ac110002
I1207 05:46:54.575]            job-name=test-job
I1207 05:46:54.575]            run=pi
I1207 05:46:54.575]   Containers:
I1207 05:46:54.575]    pi:
... skipping 329 lines ...
I1207 05:47:04.485]   selector:
I1207 05:47:04.485]     role: padawan
I1207 05:47:04.485]   sessionAffinity: None
I1207 05:47:04.486]   type: ClusterIP
I1207 05:47:04.486] status:
I1207 05:47:04.486]   loadBalancer: {}
W1207 05:47:04.586] error: you must specify resources by --filename when --local is set.
W1207 05:47:04.587] Example resource specifications include:
W1207 05:47:04.587]    '-f rsrc.yaml'
W1207 05:47:04.587]    '--filename=rsrc.json'
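[Editor's note] --local tells kubectl to run entirely client-side, so the object to mutate must be supplied via -f/--filename rather than looked up by name on the server; the error above is that guard firing. A minimal sketch under the assumption that a kubectl set subcommand triggered it (the service name is reused from the surrounding log; svc.yaml is hypothetical):

    kubectl set selector -f svc.yaml role=padawan --local -o yaml    # OK: object read from a file
    kubectl set selector service/redis-master role=padawan --local   # -> error: you must specify resources by --filename when --local is set.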
I1207 05:47:04.687] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1207 05:47:04.842] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1207 05:47:04.931] service "redis-master" deleted
... skipping 93 lines ...
I1207 05:47:11.110] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:47:11.210] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 05:47:11.322] daemonset.extensions/bind rolled back
I1207 05:47:11.423] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 05:47:11.520] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 05:47:11.629] Successful
I1207 05:47:11.629] message:error: unable to find specified revision 1000000 in history
I1207 05:47:11.630] has:unable to find specified revision
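[Editor's note] Rolling back to a revision that was never recorded fails fast; the daemonset name comes from the rollbacks just above. A sketch, with the --to-revision usage assumed from the error string:

    kubectl rollout undo daemonset/bind --to-revision=1000000
    # error: unable to find specified revision 1000000 in history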
I1207 05:47:11.725] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 05:47:11.822] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 05:47:11.935] daemonset.extensions/bind rolled back
I1207 05:47:12.037] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1207 05:47:12.133] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I1207 05:47:13.514] Namespace:    namespace-1544161632-8959
I1207 05:47:13.514] Selector:     app=guestbook,tier=frontend
I1207 05:47:13.514] Labels:       app=guestbook
I1207 05:47:13.515]               tier=frontend
I1207 05:47:13.515] Annotations:  <none>
I1207 05:47:13.515] Replicas:     3 current / 3 desired
I1207 05:47:13.515] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:13.515] Pod Template:
I1207 05:47:13.515]   Labels:  app=guestbook
I1207 05:47:13.515]            tier=frontend
I1207 05:47:13.515]   Containers:
I1207 05:47:13.515]    php-redis:
I1207 05:47:13.515]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 05:47:13.634] Namespace:    namespace-1544161632-8959
I1207 05:47:13.635] Selector:     app=guestbook,tier=frontend
I1207 05:47:13.635] Labels:       app=guestbook
I1207 05:47:13.635]               tier=frontend
I1207 05:47:13.635] Annotations:  <none>
I1207 05:47:13.635] Replicas:     3 current / 3 desired
I1207 05:47:13.635] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:13.635] Pod Template:
I1207 05:47:13.635]   Labels:  app=guestbook
I1207 05:47:13.636]            tier=frontend
I1207 05:47:13.636]   Containers:
I1207 05:47:13.636]    php-redis:
I1207 05:47:13.636]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 24 lines ...
I1207 05:47:13.841] Namespace:    namespace-1544161632-8959
I1207 05:47:13.841] Selector:     app=guestbook,tier=frontend
I1207 05:47:13.841] Labels:       app=guestbook
I1207 05:47:13.842]               tier=frontend
I1207 05:47:13.842] Annotations:  <none>
I1207 05:47:13.842] Replicas:     3 current / 3 desired
I1207 05:47:13.842] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:13.842] Pod Template:
I1207 05:47:13.842]   Labels:  app=guestbook
I1207 05:47:13.842]            tier=frontend
I1207 05:47:13.843]   Containers:
I1207 05:47:13.843]    php-redis:
I1207 05:47:13.843]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1207 05:47:13.869] Namespace:    namespace-1544161632-8959
I1207 05:47:13.869] Selector:     app=guestbook,tier=frontend
I1207 05:47:13.869] Labels:       app=guestbook
I1207 05:47:13.869]               tier=frontend
I1207 05:47:13.869] Annotations:  <none>
I1207 05:47:13.869] Replicas:     3 current / 3 desired
I1207 05:47:13.869] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:13.870] Pod Template:
I1207 05:47:13.870]   Labels:  app=guestbook
I1207 05:47:13.870]            tier=frontend
I1207 05:47:13.870]   Containers:
I1207 05:47:13.870]    php-redis:
I1207 05:47:13.870]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1207 05:47:14.029] Namespace:    namespace-1544161632-8959
I1207 05:47:14.029] Selector:     app=guestbook,tier=frontend
I1207 05:47:14.029] Labels:       app=guestbook
I1207 05:47:14.029]               tier=frontend
I1207 05:47:14.029] Annotations:  <none>
I1207 05:47:14.029] Replicas:     3 current / 3 desired
I1207 05:47:14.030] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:14.030] Pod Template:
I1207 05:47:14.030]   Labels:  app=guestbook
I1207 05:47:14.030]            tier=frontend
I1207 05:47:14.030]   Containers:
I1207 05:47:14.030]    php-redis:
I1207 05:47:14.030]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 05:47:14.143] Namespace:    namespace-1544161632-8959
I1207 05:47:14.144] Selector:     app=guestbook,tier=frontend
I1207 05:47:14.144] Labels:       app=guestbook
I1207 05:47:14.144]               tier=frontend
I1207 05:47:14.144] Annotations:  <none>
I1207 05:47:14.144] Replicas:     3 current / 3 desired
I1207 05:47:14.144] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:14.144] Pod Template:
I1207 05:47:14.144]   Labels:  app=guestbook
I1207 05:47:14.144]            tier=frontend
I1207 05:47:14.145]   Containers:
I1207 05:47:14.145]    php-redis:
I1207 05:47:14.145]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 05:47:14.251] Namespace:    namespace-1544161632-8959
I1207 05:47:14.251] Selector:     app=guestbook,tier=frontend
I1207 05:47:14.251] Labels:       app=guestbook
I1207 05:47:14.251]               tier=frontend
I1207 05:47:14.251] Annotations:  <none>
I1207 05:47:14.251] Replicas:     3 current / 3 desired
I1207 05:47:14.252] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:14.252] Pod Template:
I1207 05:47:14.252]   Labels:  app=guestbook
I1207 05:47:14.252]            tier=frontend
I1207 05:47:14.252]   Containers:
I1207 05:47:14.252]    php-redis:
I1207 05:47:14.252]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1207 05:47:14.365] Namespace:    namespace-1544161632-8959
I1207 05:47:14.365] Selector:     app=guestbook,tier=frontend
I1207 05:47:14.365] Labels:       app=guestbook
I1207 05:47:14.365]               tier=frontend
I1207 05:47:14.365] Annotations:  <none>
I1207 05:47:14.365] Replicas:     3 current / 3 desired
I1207 05:47:14.365] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:14.365] Pod Template:
I1207 05:47:14.366]   Labels:  app=guestbook
I1207 05:47:14.366]            tier=frontend
I1207 05:47:14.366]   Containers:
I1207 05:47:14.366]    php-redis:
I1207 05:47:14.366]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I1207 05:47:15.233] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I1207 05:47:15.329] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I1207 05:47:15.421] replicationcontroller/frontend scaled
I1207 05:47:15.523] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I1207 05:47:15.607] replicationcontroller "frontend" deleted
W1207 05:47:15.708] I1207 05:47:14.568356   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161632-8959", Name:"frontend", UID:"8bc32560-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1382", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-f5b9m
W1207 05:47:15.708] error: Expected replicas to be 3, was 2
W1207 05:47:15.708] I1207 05:47:15.138491   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161632-8959", Name:"frontend", UID:"8bc32560-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1389", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wwcb5
W1207 05:47:15.709] I1207 05:47:15.427506   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161632-8959", Name:"frontend", UID:"8bc32560-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1394", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-wwcb5
W1207 05:47:15.764] I1207 05:47:15.763895   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161632-8959", Name:"redis-master", UID:"8d4176c1-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1405", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-jzrfm
I1207 05:47:15.865] replicationcontroller/redis-master created
I1207 05:47:15.919] replicationcontroller/redis-slave created
W1207 05:47:16.020] I1207 05:47:15.922557   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161632-8959", Name:"redis-slave", UID:"8d59a632-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1410", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-nhzbm
... skipping 36 lines ...
I1207 05:47:17.598] service "expose-test-deployment" deleted
I1207 05:47:17.704] Successful
I1207 05:47:17.705] message:service/expose-test-deployment exposed
I1207 05:47:17.705] has:service/expose-test-deployment exposed
I1207 05:47:17.793] service "expose-test-deployment" deleted
I1207 05:47:17.887] Successful
I1207 05:47:17.887] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1207 05:47:17.887] See 'kubectl expose -h' for help and examples
I1207 05:47:17.888] has:invalid deployment: no selectors
I1207 05:47:17.974] Successful
I1207 05:47:17.975] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1207 05:47:17.975] See 'kubectl expose -h' for help and examples
I1207 05:47:17.975] has:invalid deployment: no selectors
I1207 05:47:18.128] deployment.extensions/nginx-deployment created
W1207 05:47:18.229] I1207 05:47:18.133053   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment", UID:"8eaac30d-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1510", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-659fc6fb to 3
W1207 05:47:18.230] I1207 05:47:18.136713   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-659fc6fb", UID:"8eab803a-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1511", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-dfcm4
W1207 05:47:18.230] I1207 05:47:18.139857   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-659fc6fb", UID:"8eab803a-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1511", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-7v5jr
... skipping 23 lines ...
I1207 05:47:20.108] service "frontend" deleted
I1207 05:47:20.117] service "frontend-2" deleted
I1207 05:47:20.126] service "frontend-3" deleted
I1207 05:47:20.135] service "frontend-4" deleted
I1207 05:47:20.144] service "frontend-5" deleted
I1207 05:47:20.252] Successful
I1207 05:47:20.253] message:error: cannot expose a Node
I1207 05:47:20.253] has:cannot expose
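[Editor's note] kubectl expose only works for kinds that carry a pod selector (services, replication controllers, replica sets, deployments), so pointing it at a Node is rejected. A sketch; the node name here is hypothetical:

    kubectl expose node some-node --port=80
    # error: cannot expose a Node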
I1207 05:47:20.346] Successful
I1207 05:47:20.346] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1207 05:47:20.346] has:metadata.name: Invalid value
I1207 05:47:20.447] Successful
I1207 05:47:20.447] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
W1207 05:47:22.595] I1207 05:47:22.307588   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161632-8959", Name:"frontend", UID:"91273bbe-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tw7gh
I1207 05:47:22.696] core.sh:1233: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I1207 05:47:22.698] horizontalpodautoscaler.autoscaling "frontend" deleted
I1207 05:47:22.800] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1207 05:47:22.901] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1207 05:47:22.984] horizontalpodautoscaler.autoscaling "frontend" deleted
W1207 05:47:23.085] Error: required flag(s) "max" not set
W1207 05:47:23.085] 
W1207 05:47:23.085] 
W1207 05:47:23.085] Examples:
W1207 05:47:23.085]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1207 05:47:23.086]   kubectl autoscale deployment foo --min=2 --max=10
W1207 05:47:23.086]   
... skipping 54 lines ...
I1207 05:47:23.319]           limits:
I1207 05:47:23.319]             cpu: 300m
I1207 05:47:23.319]           requests:
I1207 05:47:23.319]             cpu: 300m
I1207 05:47:23.319]       terminationGracePeriodSeconds: 0
I1207 05:47:23.319] status: {}
W1207 05:47:23.419] Error from server (NotFound): deployments.extensions "nginx-deployment-resources" not found
I1207 05:47:23.558] deployment.extensions/nginx-deployment-resources created
I1207 05:47:23.659] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1207 05:47:23.752] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:47:23.850] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1207 05:47:23.942] deployment.extensions/nginx-deployment-resources resource requirements updated
I1207 05:47:24.042] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 3 lines ...
W1207 05:47:24.422] I1207 05:47:23.565798   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-69c96fd869", UID:"91e7f602-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-77dtc
W1207 05:47:24.423] I1207 05:47:23.567922   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-69c96fd869", UID:"91e7f602-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-7fdhq
W1207 05:47:24.423] I1207 05:47:23.569009   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-69c96fd869", UID:"91e7f602-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-5mrtg
W1207 05:47:24.423] I1207 05:47:23.946622   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources", UID:"91e755db-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W1207 05:47:24.424] I1207 05:47:23.949759   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-6c5996c457", UID:"922298ae-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-9l4x6
W1207 05:47:24.424] I1207 05:47:23.952983   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources", UID:"91e755db-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W1207 05:47:24.424] E1207 05:47:23.958704   55439 replica_set.go:450] Sync "namespace-1544161632-8959/nginx-deployment-resources-6c5996c457" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-resources-6c5996c457": the object has been modified; please apply your changes to the latest version and try again
W1207 05:47:24.425] I1207 05:47:23.958850   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-69c96fd869", UID:"91e7f602-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-77dtc
W1207 05:47:24.425] I1207 05:47:23.959207   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources", UID:"91e755db-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 2
W1207 05:47:24.426] I1207 05:47:23.969218   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-6c5996c457", UID:"922298ae-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1683", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-rgg5f
W1207 05:47:24.426] error: unable to find container named redis
W1207 05:47:24.426] I1207 05:47:24.334082   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources", UID:"91e755db-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 0
W1207 05:47:24.426] I1207 05:47:24.344256   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-69c96fd869", UID:"91e7f602-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-5mrtg
W1207 05:47:24.427] I1207 05:47:24.344620   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-69c96fd869", UID:"91e7f602-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-7fdhq
W1207 05:47:24.427] I1207 05:47:24.345074   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources", UID:"91e755db-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 2
W1207 05:47:24.428] I1207 05:47:24.349632   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-5f4579485f", UID:"925c630c-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-rnmpr
W1207 05:47:24.428] I1207 05:47:24.352999   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-5f4579485f", UID:"925c630c-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-x47kv
... skipping 77 lines ...
W1207 05:47:25.133] I1207 05:47:24.641527   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources", UID:"91e755db-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1717", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-6c5996c457 to 0
W1207 05:47:25.133] I1207 05:47:24.646707   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-6c5996c457", UID:"922298ae-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6c5996c457-rgg5f
W1207 05:47:25.134] I1207 05:47:24.647848   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-6c5996c457", UID:"922298ae-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6c5996c457-9l4x6
W1207 05:47:25.134] I1207 05:47:24.650039   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources", UID:"91e755db-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1719", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 2
W1207 05:47:25.134] I1207 05:47:24.653392   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-ff8d89cb6", UID:"928ba66b-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1727", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-frdmb
W1207 05:47:25.135] I1207 05:47:24.665178   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161632-8959", Name:"nginx-deployment-resources-ff8d89cb6", UID:"928ba66b-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1727", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-zkjhb
W1207 05:47:25.135] error: you must specify resources by --filename when --local is set.
W1207 05:47:25.135] Example resource specifications include:
W1207 05:47:25.135]    '-f rsrc.yaml'
W1207 05:47:25.135]    '--filename=rsrc.json'
I1207 05:47:25.236] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1207 05:47:25.290] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1207 05:47:25.388] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I1207 05:47:26.902]                 pod-template-hash=55c9b846cc
I1207 05:47:26.903] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1207 05:47:26.903]                 deployment.kubernetes.io/max-replicas: 2
I1207 05:47:26.903]                 deployment.kubernetes.io/revision: 1
I1207 05:47:26.903] Controlled By:  Deployment/test-nginx-apps
I1207 05:47:26.903] Replicas:       1 current / 1 desired
I1207 05:47:26.903] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:26.903] Pod Template:
I1207 05:47:26.904]   Labels:  app=test-nginx-apps
I1207 05:47:26.904]            pod-template-hash=55c9b846cc
I1207 05:47:26.904]   Containers:
I1207 05:47:26.904]    nginx:
I1207 05:47:26.904]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 87 lines ...
W1207 05:47:30.982] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W1207 05:47:30.983] I1207 05:47:30.887189   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx", UID:"95ea63ac-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1896", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 1
W1207 05:47:30.984] I1207 05:47:30.891136   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-9486b7cb7", UID:"9645a35e-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1897", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-xl4gm
W1207 05:47:30.984] I1207 05:47:30.894930   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx", UID:"95ea63ac-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1896", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-6f6bb85d9c to 2
W1207 05:47:30.984] I1207 05:47:30.900814   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx", UID:"95ea63ac-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1900", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 2
W1207 05:47:30.985] I1207 05:47:30.901799   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-6f6bb85d9c", UID:"95eb1957-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1903", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-6f6bb85d9c-bpg9q
W1207 05:47:30.985] E1207 05:47:30.901837   55439 replica_set.go:450] Sync "namespace-1544161645-13655/nginx-9486b7cb7" failed with Operation cannot be fulfilled on replicasets.apps "nginx-9486b7cb7": the object has been modified; please apply your changes to the latest version and try again
W1207 05:47:30.986] I1207 05:47:30.907747   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-9486b7cb7", UID:"9645a35e-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1906", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-zjz5m
I1207 05:47:31.086] apps.sh:293: Successful get deployment.extensions {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1207 05:47:31.109]     Image:	k8s.gcr.io/nginx:test-cmd
I1207 05:47:31.207] apps.sh:296: Successful get deployment.extensions {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1207 05:47:31.332] deployment.extensions/nginx rolled back
I1207 05:47:32.438] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:47:32.641] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:47:32.757] deployment.extensions/nginx rolled back
W1207 05:47:32.858] error: unable to find specified revision 1000000 in history
I1207 05:47:33.860] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1207 05:47:33.954] deployment.extensions/nginx paused
W1207 05:47:34.070] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I1207 05:47:34.171] deployment.extensions/nginx resumed
I1207 05:47:34.290] deployment.extensions/nginx rolled back
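[Editor's note] The pause/error/resume/rollback sequence above demonstrates that rollbacks are refused while a deployment is paused. A sketch of the equivalent commands, with the deployment name taken from the log:

    kubectl rollout pause deployment/nginx
    kubectl rollout undo deployment/nginx     # -> error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
    kubectl rollout resume deployment/nginx
    kubectl rollout undo deployment/nginx     # succeeds once resumed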
I1207 05:47:34.493]     deployment.kubernetes.io/revision-history: 1,3
W1207 05:47:34.686] error: desired revision (3) is different from the running revision (5)
I1207 05:47:34.845] deployment.extensions/nginx2 created
I1207 05:47:34.937] deployment.extensions "nginx2" deleted
I1207 05:47:35.029] deployment.extensions "nginx" deleted
I1207 05:47:35.132] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:47:35.281] deployment.extensions/nginx-deployment created
W1207 05:47:35.382] I1207 05:47:34.849272   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx2", UID:"98a1856e-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1937", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-6b58f7cc65 to 3
... skipping 10 lines ...
I1207 05:47:35.679] deployment.extensions/nginx-deployment image updated
W1207 05:47:35.780] I1207 05:47:35.683583   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"98e419fe-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1985", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W1207 05:47:35.780] I1207 05:47:35.687468   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-85db47bbdb", UID:"99217cc3-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1986", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-nnxjb
W1207 05:47:35.781] I1207 05:47:35.692166   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"98e419fe-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1985", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1207 05:47:35.781] I1207 05:47:35.696636   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-646d4f779d", UID:"98e4e0b2-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-pgg4f
W1207 05:47:35.782] I1207 05:47:35.699574   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"98e419fe-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1987", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 2
W1207 05:47:35.782] E1207 05:47:35.700778   55439 replica_set.go:450] Sync "namespace-1544161645-13655/nginx-deployment-85db47bbdb" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-85db47bbdb": the object has been modified; please apply your changes to the latest version and try again
W1207 05:47:35.782] I1207 05:47:35.707997   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-85db47bbdb", UID:"99217cc3-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-hlkq5
I1207 05:47:35.883] apps.sh:337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1207 05:47:35.906] apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1207 05:47:36.106] deployment.extensions/nginx-deployment image updated
I1207 05:47:36.210] apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:47:36.305] apps.sh:344: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
... skipping 7 lines ...
I1207 05:47:37.135] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:47:37.323] apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:47:37.421] apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:47:37.506] deployment.extensions "nginx-deployment" deleted
I1207 05:47:37.605] apps.sh:366: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
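The "image updated" lines and the go-template assertions around them reflect `kubectl set image` followed by a templated get; a sketch, with the container name "nginx" assumed (the "redis" failure matches the error logged just below):

    # update one container's image by name
    kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:test-cmd
    # a container name not present in the pod spec is rejected
    kubectl set image deployment/nginx-deployment redis=k8s.gcr.io/redis   # error: unable to find container named "redis"
    # assert the result the way apps.sh does, via a go-template
    kubectl get deployment -o go-template='{{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}'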
I1207 05:47:37.751] deployment.extensions/nginx-deployment created
W1207 05:47:37.851] error: unable to find container named "redis"
W1207 05:47:37.852] I1207 05:47:36.948880   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"98e419fe-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2018", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1207 05:47:37.852] I1207 05:47:36.954967   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-646d4f779d", UID:"98e4e0b2-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-5n54h
W1207 05:47:37.853] I1207 05:47:36.955229   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-646d4f779d", UID:"98e4e0b2-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-jmmzk
W1207 05:47:37.853] I1207 05:47:36.956525   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"98e419fe-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2021", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 2
W1207 05:47:37.853] I1207 05:47:36.958270   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-dc756cc6", UID:"99e19587-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-52nv5
W1207 05:47:37.853] I1207 05:47:36.961222   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-dc756cc6", UID:"99e19587-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-znh78
... skipping 9 lines ...
I1207 05:47:38.346] apps.sh:374: Successful get secret {{range.items}}{{.metadata.name}}:{{end}}: test-set-env-secret:
I1207 05:47:38.455] deployment.extensions/nginx-deployment env updated
W1207 05:47:38.556] I1207 05:47:38.459609   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"9a5cf98e-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 1
W1207 05:47:38.556] I1207 05:47:38.463859   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-5b795689cd", UID:"9ac91551-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-ld8kq
W1207 05:47:38.556] I1207 05:47:38.466989   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"9a5cf98e-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1207 05:47:38.557] I1207 05:47:38.473756   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"9a5cf98e-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2077", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 2
W1207 05:47:38.557] E1207 05:47:38.475566   55439 replica_set.go:450] Sync "namespace-1544161645-13655/nginx-deployment-5b795689cd" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5b795689cd": the object has been modified; please apply your changes to the latest version and try again
W1207 05:47:38.557] I1207 05:47:38.476044   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-646d4f779d", UID:"9a5d9395-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2078", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-xltjv
W1207 05:47:38.558] I1207 05:47:38.479468   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-5b795689cd", UID:"9ac91551-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2083", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-k9b6p
I1207 05:47:38.658] apps.sh:378: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
I1207 05:47:38.664] apps.sh:380: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
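The env assertions check the name and count of container env vars after `kubectl set env`; a sketch, assuming a literal KEY_2 value and the `--from` sources implied by the configmap and secret deleted later in this log:

    kubectl set env deployment/nginx-deployment KEY_2=value                          # add a literal var
    kubectl set env deployment/nginx-deployment --from=configmap/test-set-env-config # import keys from a configmap
    kubectl set env deployment/nginx-deployment --from=secret/test-set-env-secret    # import keys from a secret
    kubectl get deploy nginx-deployment -o go-template='{{ len (index .spec.template.spec.containers 0).env }}'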
I1207 05:47:38.778] deployment.extensions/nginx-deployment env updated
W1207 05:47:38.879] I1207 05:47:38.792402   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"9a5cf98e-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2097", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
... skipping 13 lines ...
I1207 05:47:39.224] deployment.extensions/nginx-deployment env updated
I1207 05:47:39.224] deployment.extensions/nginx-deployment env updated
W1207 05:47:39.325] I1207 05:47:39.234940   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"9a5cf98e-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2146", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-794dcdf6bb to 0
W1207 05:47:39.326] I1207 05:47:39.315521   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-5b795689cd", UID:"9ac91551-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-ld8kq
W1207 05:47:39.361] I1207 05:47:39.360311   55439 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment", UID:"9a5cf98e-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2148", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-669d4f8fc9 to 2
W1207 05:47:39.366] I1207 05:47:39.365966   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161645-13655", Name:"nginx-deployment-5b795689cd", UID:"9ac91551-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-k9b6p
W1207 05:47:39.464] E1207 05:47:39.463356   55439 replica_set.go:450] Sync "namespace-1544161645-13655/nginx-deployment-794dcdf6bb" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-794dcdf6bb": the object has been modified; please apply your changes to the latest version and try again
I1207 05:47:39.564] deployment.extensions/nginx-deployment env updated
I1207 05:47:39.565] deployment.extensions/nginx-deployment env updated
I1207 05:47:39.565] deployment.extensions "nginx-deployment" deleted
I1207 05:47:39.651] configmap "test-set-env-config" deleted
I1207 05:47:39.743] secret "test-set-env-secret" deleted
I1207 05:47:39.766] +++ exit code: 0
... skipping 11 lines ...
I1207 05:47:40.283] replicaset.extensions/frontend created
I1207 05:47:40.296] +++ [1207 05:47:40] Deleting rs
I1207 05:47:40.385] replicaset.extensions "frontend" deleted
I1207 05:47:40.489] apps.sh:508: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:47:40.588] apps.sh:512: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:47:40.750] replicaset.extensions/frontend-no-cascade created
W1207 05:47:40.850] E1207 05:47:39.612845   55439 replica_set.go:450] Sync "namespace-1544161645-13655/nginx-deployment-65b869c68c" failed with replicasets.apps "nginx-deployment-65b869c68c" not found
W1207 05:47:40.851] E1207 05:47:39.863020   55439 replica_set.go:450] Sync "namespace-1544161645-13655/nginx-deployment-669d4f8fc9" failed with replicasets.apps "nginx-deployment-669d4f8fc9" not found
W1207 05:47:40.851] E1207 05:47:39.912573   55439 replica_set.go:450] Sync "namespace-1544161645-13655/nginx-deployment-5b795689cd" failed with replicasets.apps "nginx-deployment-5b795689cd" not found
W1207 05:47:40.851] E1207 05:47:39.962980   55439 replica_set.go:450] Sync "namespace-1544161645-13655/nginx-deployment-5766b7c95b" failed with replicasets.apps "nginx-deployment-5766b7c95b" not found
W1207 05:47:40.851] E1207 05:47:40.012853   55439 replica_set.go:450] Sync "namespace-1544161645-13655/nginx-deployment-794dcdf6bb" failed with replicasets.apps "nginx-deployment-794dcdf6bb" not found
W1207 05:47:40.852] I1207 05:47:40.290949   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161659-2217", Name:"frontend", UID:"9bdf5f15-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2176", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-c5tkt
W1207 05:47:40.852] I1207 05:47:40.294459   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161659-2217", Name:"frontend", UID:"9bdf5f15-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2176", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j2jkn
W1207 05:47:40.853] I1207 05:47:40.294750   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161659-2217", Name:"frontend", UID:"9bdf5f15-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2176", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qgwr4
W1207 05:47:40.853] E1207 05:47:40.512761   55439 replica_set.go:450] Sync "namespace-1544161659-2217/frontend" failed with replicasets.apps "frontend" not found
W1207 05:47:40.853] I1207 05:47:40.753230   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161659-2217", Name:"frontend-no-cascade", UID:"9c267bde-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2191", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-cj5rv
W1207 05:47:40.853] I1207 05:47:40.756391   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161659-2217", Name:"frontend-no-cascade", UID:"9c267bde-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2191", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-qv7n8
W1207 05:47:40.854] I1207 05:47:40.756847   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161659-2217", Name:"frontend-no-cascade", UID:"9c267bde-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2191", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-w78dp
I1207 05:47:40.954] apps.sh:518: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I1207 05:47:40.955] +++ [1207 05:47:40] Deleting rs
I1207 05:47:40.955] replicaset.extensions "frontend-no-cascade" deleted
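"frontend-no-cascade" is deleted without cascading, so its pods survive the ReplicaSet; at this kubectl version that is spelled `--cascade=false` (a sketch):

    kubectl delete rs frontend-no-cascade --cascade=false
    # the ReplicaSet is gone but its pods are orphaned, not deleted
    kubectl get pods -l tier=frontend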
... skipping 11 lines ...
I1207 05:47:41.894] Namespace:    namespace-1544161659-2217
I1207 05:47:41.895] Selector:     app=guestbook,tier=frontend
I1207 05:47:41.895] Labels:       app=guestbook
I1207 05:47:41.895]               tier=frontend
I1207 05:47:41.895] Annotations:  <none>
I1207 05:47:41.895] Replicas:     3 current / 3 desired
I1207 05:47:41.895] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:41.895] Pod Template:
I1207 05:47:41.895]   Labels:  app=guestbook
I1207 05:47:41.896]            tier=frontend
I1207 05:47:41.896]   Containers:
I1207 05:47:41.896]    php-redis:
I1207 05:47:41.896]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 05:47:42.019] Namespace:    namespace-1544161659-2217
I1207 05:47:42.019] Selector:     app=guestbook,tier=frontend
I1207 05:47:42.019] Labels:       app=guestbook
I1207 05:47:42.019]               tier=frontend
I1207 05:47:42.019] Annotations:  <none>
I1207 05:47:42.019] Replicas:     3 current / 3 desired
I1207 05:47:42.019] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:42.019] Pod Template:
I1207 05:47:42.019]   Labels:  app=guestbook
I1207 05:47:42.020]            tier=frontend
I1207 05:47:42.020]   Containers:
I1207 05:47:42.020]    php-redis:
I1207 05:47:42.020]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 10 lines ...
I1207 05:47:42.021]   Type    Reason            Age   From                   Message
I1207 05:47:42.021]   ----    ------            ----  ----                   -------
I1207 05:47:42.021]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-gnd7r
I1207 05:47:42.021]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-lx7cz
I1207 05:47:42.021]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-shmrs
I1207 05:47:42.021] 
W1207 05:47:42.122] E1207 05:47:41.062383   55439 replica_set.go:450] Sync "namespace-1544161659-2217/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W1207 05:47:42.122] I1207 05:47:41.641835   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161659-2217", Name:"frontend", UID:"9cae23c1-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2213", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gnd7r
W1207 05:47:42.123] I1207 05:47:41.645125   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161659-2217", Name:"frontend", UID:"9cae23c1-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2213", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-lx7cz
W1207 05:47:42.123] I1207 05:47:41.645512   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161659-2217", Name:"frontend", UID:"9cae23c1-f9e3-11e8-a909-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2213", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-shmrs
I1207 05:47:42.223] apps.sh:541: Successful describe
I1207 05:47:42.223] Name:         frontend
I1207 05:47:42.224] Namespace:    namespace-1544161659-2217
I1207 05:47:42.224] Selector:     app=guestbook,tier=frontend
I1207 05:47:42.224] Labels:       app=guestbook
I1207 05:47:42.224]               tier=frontend
I1207 05:47:42.224] Annotations:  <none>
I1207 05:47:42.224] Replicas:     3 current / 3 desired
I1207 05:47:42.224] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:42.224] Pod Template:
I1207 05:47:42.224]   Labels:  app=guestbook
I1207 05:47:42.224]            tier=frontend
I1207 05:47:42.224]   Containers:
I1207 05:47:42.224]    php-redis:
I1207 05:47:42.225]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I1207 05:47:42.255] Namespace:    namespace-1544161659-2217
I1207 05:47:42.255] Selector:     app=guestbook,tier=frontend
I1207 05:47:42.255] Labels:       app=guestbook
I1207 05:47:42.255]               tier=frontend
I1207 05:47:42.255] Annotations:  <none>
I1207 05:47:42.255] Replicas:     3 current / 3 desired
I1207 05:47:42.255] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:42.256] Pod Template:
I1207 05:47:42.256]   Labels:  app=guestbook
I1207 05:47:42.256]            tier=frontend
I1207 05:47:42.256]   Containers:
I1207 05:47:42.256]    php-redis:
I1207 05:47:42.256]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1207 05:47:42.403] Namespace:    namespace-1544161659-2217
I1207 05:47:42.403] Selector:     app=guestbook,tier=frontend
I1207 05:47:42.403] Labels:       app=guestbook
I1207 05:47:42.403]               tier=frontend
I1207 05:47:42.403] Annotations:  <none>
I1207 05:47:42.404] Replicas:     3 current / 3 desired
I1207 05:47:42.404] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:42.404] Pod Template:
I1207 05:47:42.404]   Labels:  app=guestbook
I1207 05:47:42.404]            tier=frontend
I1207 05:47:42.404]   Containers:
I1207 05:47:42.404]    php-redis:
I1207 05:47:42.404]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 05:47:42.535] Namespace:    namespace-1544161659-2217
I1207 05:47:42.535] Selector:     app=guestbook,tier=frontend
I1207 05:47:42.535] Labels:       app=guestbook
I1207 05:47:42.535]               tier=frontend
I1207 05:47:42.536] Annotations:  <none>
I1207 05:47:42.536] Replicas:     3 current / 3 desired
I1207 05:47:42.536] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:42.536] Pod Template:
I1207 05:47:42.536]   Labels:  app=guestbook
I1207 05:47:42.536]            tier=frontend
I1207 05:47:42.536]   Containers:
I1207 05:47:42.537]    php-redis:
I1207 05:47:42.537]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 05:47:42.666] Namespace:    namespace-1544161659-2217
I1207 05:47:42.666] Selector:     app=guestbook,tier=frontend
I1207 05:47:42.666] Labels:       app=guestbook
I1207 05:47:42.667]               tier=frontend
I1207 05:47:42.667] Annotations:  <none>
I1207 05:47:42.667] Replicas:     3 current / 3 desired
I1207 05:47:42.667] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:42.668] Pod Template:
I1207 05:47:42.668]   Labels:  app=guestbook
I1207 05:47:42.668]            tier=frontend
I1207 05:47:42.668]   Containers:
I1207 05:47:42.668]    php-redis:
I1207 05:47:42.669]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I1207 05:47:42.814] Namespace:    namespace-1544161659-2217
I1207 05:47:42.814] Selector:     app=guestbook,tier=frontend
I1207 05:47:42.814] Labels:       app=guestbook
I1207 05:47:42.814]               tier=frontend
I1207 05:47:42.814] Annotations:  <none>
I1207 05:47:42.815] Replicas:     3 current / 3 desired
I1207 05:47:42.815] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:42.815] Pod Template:
I1207 05:47:42.815]   Labels:  app=guestbook
I1207 05:47:42.815]            tier=frontend
I1207 05:47:42.815]   Containers:
I1207 05:47:42.816]    php-redis:
I1207 05:47:42.816]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I1207 05:47:49.424] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1207 05:47:49.552] apps.sh:643: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I1207 05:47:49.656] horizontalpodautoscaler.autoscaling "frontend" deleted
I1207 05:47:49.784] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1207 05:47:49.908] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1207 05:47:50.023] horizontalpodautoscaler.autoscaling "frontend" deleted
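The two hpa assertions read back min/max/target CPU after `kubectl autoscale`, and the "required flag(s) \"max\" not set" failure that follows is the same command without --max. A sketch (the target is assumed to be the frontend ReplicaSet, with the hpa deleted between runs as the log shows):

    kubectl autoscale rs frontend --min=1 --max=2 --cpu-percent=70   # -> 1 2 70
    kubectl autoscale rs frontend --min=2 --max=3 --cpu-percent=80   # -> 2 3 80
    kubectl autoscale rs frontend --min=2                            # Error: required flag(s) "max" not set
    kubectl get hpa frontend -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'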
W1207 05:47:50.123] Error: required flag(s) "max" not set
W1207 05:47:50.124] 
W1207 05:47:50.124] 
W1207 05:47:50.124] Examples:
W1207 05:47:50.124]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1207 05:47:50.125]   kubectl autoscale deployment foo --min=2 --max=10
W1207 05:47:50.125]   
... skipping 87 lines ...
I1207 05:47:53.371] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 05:47:53.466] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 05:47:53.571] statefulset.apps/nginx rolled back
I1207 05:47:53.666] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1207 05:47:53.759] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 05:47:53.882] Successful
I1207 05:47:53.882] message:error: unable to find specified revision 1000000 in history
I1207 05:47:53.882] has:unable to find specified revision
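The Successful/message/has triplets are the test-cmd harness capturing a command's output and asserting a substring; roughly (kube::test::if_has_string is the helper defined in hack/lib/test.sh, the exact call site here is an assumption):

    output_message=$(kubectl rollout undo statefulset/nginx --to-revision=1000000 2>&1 || true)
    kube::test::if_has_string "${output_message}" 'unable to find specified revision'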
W1207 05:47:53.983] I1207 05:47:51.590004   55439 stateful_set.go:427] StatefulSet has been deleted namespace-1544161670-17035/nginx
I1207 05:47:54.083] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1207 05:47:54.098] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 05:47:54.227] statefulset.apps/nginx rolled back
I1207 05:47:54.341] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
... skipping 59 lines ...
I1207 05:47:56.481] Name:         mock
I1207 05:47:56.481] Namespace:    namespace-1544161675-15791
I1207 05:47:56.482] Selector:     app=mock
I1207 05:47:56.482] Labels:       app=mock
I1207 05:47:56.482] Annotations:  <none>
I1207 05:47:56.482] Replicas:     1 current / 1 desired
I1207 05:47:56.482] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:56.482] Pod Template:
I1207 05:47:56.483]   Labels:  app=mock
I1207 05:47:56.483]   Containers:
I1207 05:47:56.483]    mock-container:
I1207 05:47:56.483]     Image:        k8s.gcr.io/pause:2.0
I1207 05:47:56.483]     Port:         9949/TCP
... skipping 56 lines ...
I1207 05:47:58.920] Name:         mock
I1207 05:47:58.920] Namespace:    namespace-1544161675-15791
I1207 05:47:58.920] Selector:     app=mock
I1207 05:47:58.920] Labels:       app=mock
I1207 05:47:58.920] Annotations:  <none>
I1207 05:47:58.920] Replicas:     1 current / 1 desired
I1207 05:47:58.921] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:47:58.921] Pod Template:
I1207 05:47:58.921]   Labels:  app=mock
I1207 05:47:58.921]   Containers:
I1207 05:47:58.921]    mock-container:
I1207 05:47:58.921]     Image:        k8s.gcr.io/pause:2.0
I1207 05:47:58.921]     Port:         9949/TCP
... skipping 56 lines ...
I1207 05:48:01.067] Name:         mock
I1207 05:48:01.067] Namespace:    namespace-1544161675-15791
I1207 05:48:01.067] Selector:     app=mock
I1207 05:48:01.067] Labels:       app=mock
I1207 05:48:01.068] Annotations:  <none>
I1207 05:48:01.068] Replicas:     1 current / 1 desired
I1207 05:48:01.068] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:48:01.068] Pod Template:
I1207 05:48:01.068]   Labels:  app=mock
I1207 05:48:01.068]   Containers:
I1207 05:48:01.068]    mock-container:
I1207 05:48:01.068]     Image:        k8s.gcr.io/pause:2.0
I1207 05:48:01.068]     Port:         9949/TCP
... skipping 42 lines ...
I1207 05:48:03.016] Namespace:    namespace-1544161675-15791
I1207 05:48:03.016] Selector:     app=mock
I1207 05:48:03.017] Labels:       app=mock
I1207 05:48:03.017]               status=replaced
I1207 05:48:03.017] Annotations:  <none>
I1207 05:48:03.017] Replicas:     1 current / 1 desired
I1207 05:48:03.017] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:48:03.017] Pod Template:
I1207 05:48:03.017]   Labels:  app=mock
I1207 05:48:03.017]   Containers:
I1207 05:48:03.017]    mock-container:
I1207 05:48:03.017]     Image:        k8s.gcr.io/pause:2.0
I1207 05:48:03.017]     Port:         9949/TCP
... skipping 11 lines ...
I1207 05:48:03.018] Namespace:    namespace-1544161675-15791
I1207 05:48:03.018] Selector:     app=mock2
I1207 05:48:03.018] Labels:       app=mock2
I1207 05:48:03.018]               status=replaced
I1207 05:48:03.019] Annotations:  <none>
I1207 05:48:03.019] Replicas:     1 current / 1 desired
I1207 05:48:03.019] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:48:03.019] Pod Template:
I1207 05:48:03.019]   Labels:  app=mock2
I1207 05:48:03.019]   Containers:
I1207 05:48:03.019]    mock-container:
I1207 05:48:03.019]     Image:        k8s.gcr.io/pause:2.0
I1207 05:48:03.019]     Port:         9949/TCP
... skipping 105 lines ...
I1207 05:48:08.126] +++ [1207 05:48:08] Creating namespace namespace-1544161688-31430
I1207 05:48:08.208] namespace/namespace-1544161688-31430 created
I1207 05:48:08.280] Context "test" modified.
I1207 05:48:08.287] +++ [1207 05:48:08] Testing persistent volumes
I1207 05:48:08.389] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:48:08.554] persistentvolume/pv0001 created
W1207 05:48:08.655] E1207 05:48:08.562072   55439 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I1207 05:48:08.756] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I1207 05:48:08.756] persistentvolume "pv0001" deleted
I1207 05:48:08.915] persistentvolume/pv0002 created
W1207 05:48:09.015] E1207 05:48:08.917951   55439 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I1207 05:48:09.116] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I1207 05:48:09.116] persistentvolume "pv0002" deleted
I1207 05:48:09.270] persistentvolume/pv0003 created
I1207 05:48:09.370] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I1207 05:48:09.453] persistentvolume "pv0003" deleted
W1207 05:48:09.554] E1207 05:48:09.272650   55439 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I1207 05:48:09.655] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:48:09.655] +++ exit code: 0
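Each pv000N round-trip above is a create-from-file, a templated get, and a delete; a minimal sketch of the kind of PersistentVolume being created (the capacity, access mode, and hostPath are illustrative assumptions, not the fixture's actual values):

    cat <<'EOF' >pv0001.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv0001
    spec:
      capacity:
        storage: 1Gi
      accessModes: ["ReadWriteOnce"]
      hostPath:
        path: /tmp/pv0001
    EOF
    kubectl create -f pv0001.yaml
    kubectl get pv -o go-template='{{range.items}}{{.metadata.name}}:{{end}}'   # -> pv0001:
    kubectl delete pv pv0001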
I1207 05:48:09.655] Recording: run_persistent_volume_claims_tests
I1207 05:48:09.655] Running command: run_persistent_volume_claims_tests
I1207 05:48:09.655] 
I1207 05:48:09.655] +++ Running case: test-cmd.run_persistent_volume_claims_tests 
... skipping 472 lines ...
I1207 05:48:14.331] yes
I1207 05:48:14.331] has:the server doesn't have a resource type
I1207 05:48:14.409] Successful
I1207 05:48:14.409] message:yes
I1207 05:48:14.409] has:yes
I1207 05:48:14.490] Successful
I1207 05:48:14.490] message:error: --subresource can not be used with NonResourceURL
I1207 05:48:14.490] has:subresource can not be used with NonResourceURL
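These checks exercise `kubectl auth can-i` against both resources and non-resource URLs; mixing --subresource with a non-resource URL is the rejected case (a sketch):

    kubectl auth can-i get pods --subresource=log    # resource + subresource: allowed form
    kubectl auth can-i get /logs                     # non-resource URL: allowed form
    kubectl auth can-i get /logs --subresource=log   # error: --subresource can not be used with NonResourceURL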
I1207 05:48:14.575] Successful
I1207 05:48:14.659] Successful
I1207 05:48:14.660] message:yes
I1207 05:48:14.660] 0
I1207 05:48:14.660] has:0
... skipping 6 lines ...
I1207 05:48:14.873] role.rbac.authorization.k8s.io/testing-R reconciled
I1207 05:48:14.973] legacy-script.sh:736: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I1207 05:48:15.070] legacy-script.sh:737: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I1207 05:48:15.164] legacy-script.sh:738: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I1207 05:48:15.260] legacy-script.sh:739: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I1207 05:48:15.344] Successful
I1207 05:48:15.344] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I1207 05:48:15.344] has:only rbac.authorization.k8s.io/v1 is supported
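The reconciled role/binding lines come from `kubectl auth reconcile`, which only accepts rbac.authorization.k8s.io/v1 objects; a sketch (the file names are assumptions):

    kubectl auth reconcile -f rbac-v1.yaml        # roles, bindings, clusterroles reconciled
    kubectl auth reconcile -f rbac-v1beta1.yaml   # error: only rbac.authorization.k8s.io/v1 is supported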
I1207 05:48:15.438] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I1207 05:48:15.446] role.rbac.authorization.k8s.io "testing-R" deleted
I1207 05:48:15.457] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I1207 05:48:15.469] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I1207 05:48:15.483] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I1207 05:48:16.697] +++ Running case: test-cmd.run_kubectl_explain_tests 
I1207 05:48:16.700] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:48:16.703] +++ command: run_kubectl_explain_tests
I1207 05:48:16.714] +++ [1207 05:48:16] Testing kubectl(v1:explain)
W1207 05:48:16.815] I1207 05:48:16.559417   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161695-4976", Name:"cassandra", UID:"b13ae6d5-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"2755", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-pr5wr
W1207 05:48:16.815] I1207 05:48:16.568420   55439 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161695-4976", Name:"cassandra", UID:"b13ae6d5-f9e3-11e8-a909-0242ac110002", APIVersion:"v1", ResourceVersion:"2762", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-68wnz
W1207 05:48:16.815] E1207 05:48:16.575455   55439 replica_set.go:450] Sync "namespace-1544161695-4976/cassandra" failed with replicationcontrollers "cassandra" not found
I1207 05:48:16.916] KIND:     Pod
I1207 05:48:16.916] VERSION:  v1
I1207 05:48:16.916] 
I1207 05:48:16.916] DESCRIPTION:
I1207 05:48:16.917]      Pod is a collection of containers that can run on a host. This resource is
I1207 05:48:16.917]      created by clients and scheduled onto hosts.
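The KIND/VERSION/DESCRIPTION block above is the standard output of kubectl explain; the invocations are simply:

    kubectl explain pods
    kubectl explain pods.spec.containers   # drill into a field the same way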
... skipping 849 lines ...
I1207 05:48:43.449] message:node/127.0.0.1 already uncordoned (dry run)
I1207 05:48:43.450] has:already uncordoned
I1207 05:48:43.555] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I1207 05:48:43.643] node/127.0.0.1 labeled
I1207 05:48:43.745] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I1207 05:48:43.817] Successful
I1207 05:48:43.818] message:error: cannot specify both a node name and a --selector option
I1207 05:48:43.818] See 'kubectl drain -h' for help and examples
I1207 05:48:43.818] has:cannot specify both a node name
I1207 05:48:43.889] Successful
I1207 05:48:43.890] message:error: USAGE: cordon NODE [flags]
I1207 05:48:43.890] See 'kubectl cordon -h' for help and examples
I1207 05:48:43.890] has:error\: USAGE\: cordon NODE
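The drain/cordon failures above are argument-validation cases: drain takes either a node name or --selector, never both, and cordon requires a node argument (a sketch):

    kubectl cordon 127.0.0.1
    kubectl uncordon 127.0.0.1
    kubectl drain 127.0.0.1 --selector=test=label   # error: cannot specify both a node name and a --selector option
    kubectl cordon                                  # error: USAGE: cordon NODE [flags]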
I1207 05:48:43.971] node/127.0.0.1 already uncordoned
I1207 05:48:44.052] Successful
I1207 05:48:44.052] message:error: You must provide one or more resources by argument or filename.
I1207 05:48:44.052] Example resource specifications include:
I1207 05:48:44.052]    '-f rsrc.yaml'
I1207 05:48:44.052]    '--filename=rsrc.json'
I1207 05:48:44.053]    '<resource> <name>'
I1207 05:48:44.053]    '<resource>'
I1207 05:48:44.053] has:must provide one or more resources
... skipping 15 lines ...
I1207 05:48:44.502] Successful
I1207 05:48:44.503] message:The following kubectl-compatible plugins are available:
I1207 05:48:44.503] 
I1207 05:48:44.503] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I1207 05:48:44.504]   - warning: kubectl-version overwrites existing command: "kubectl version"
I1207 05:48:44.504] 
I1207 05:48:44.504] error: one plugin warning was found
I1207 05:48:44.504] has:kubectl-version overwrites existing command: "kubectl version"
I1207 05:48:44.580] Successful
I1207 05:48:44.580] message:The following kubectl-compatible plugins are available:
I1207 05:48:44.580] 
I1207 05:48:44.580] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 05:48:44.581] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I1207 05:48:44.581]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 05:48:44.581] 
I1207 05:48:44.581] error: one plugin warning was found
I1207 05:48:44.581] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I1207 05:48:44.653] Successful
I1207 05:48:44.654] message:The following kubectl-compatible plugins are available:
I1207 05:48:44.654] 
I1207 05:48:44.654] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 05:48:44.654] has:plugins are available
I1207 05:48:44.727] Successful
I1207 05:48:44.728] message:
I1207 05:48:44.728] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I1207 05:48:44.728] error: unable to find any kubectl plugins in your PATH
I1207 05:48:44.728] has:unable to find any kubectl plugins in your PATH
I1207 05:48:44.799] Successful
I1207 05:48:44.800] message:I am plugin foo
I1207 05:48:44.800] has:plugin foo
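kubectl discovers plugins as kubectl-* executables on PATH; `kubectl plugin list` warns about names that override built-ins or shadow each other, and `kubectl foo` dispatches to the first kubectl-foo found (a sketch using the fixture paths from the log):

    export PATH="test/fixtures/pkg/kubectl/plugins:$PATH"
    kubectl plugin list   # warns when kubectl-version overrides "kubectl version", or two kubectl-foo shadow each other
    kubectl foo           # runs the plugin; here it prints "I am plugin foo"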
I1207 05:48:44.876] Successful
I1207 05:48:44.877] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.901+5d76949082d149", GitCommit:"5d76949082d14918dea6d2bae668bb58512a4408", GitTreeState:"clean", BuildDate:"2018-12-07T05:41:58Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I1207 05:48:44.959] 
I1207 05:48:44.961] +++ Running case: test-cmd.run_impersonation_tests 
I1207 05:48:44.964] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:48:44.967] +++ command: run_impersonation_tests
I1207 05:48:44.976] +++ [1207 05:48:44] Testing impersonation
I1207 05:48:45.050] Successful
I1207 05:48:45.050] message:error: requesting groups or user-extra for  without impersonating a user
I1207 05:48:45.050] has:without impersonating a user
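Impersonation is driven by the --as / --as-group client flags; supplying groups without a user is the rejected case, and the CSR created next records the impersonated identity in .spec.username (the file name is an assumption):

    kubectl create -f csr.yaml --as-group=system:masters   # error: requesting groups ... without impersonating a user
    kubectl create -f csr.yaml --as=user1                  # csr .spec.username becomes user1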
I1207 05:48:45.206] certificatesigningrequest.certificates.k8s.io/foo created
I1207 05:48:45.307] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I1207 05:48:45.405] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I1207 05:48:45.492] certificatesigningrequest.certificates.k8s.io "foo" deleted
I1207 05:48:45.660] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 44 lines ...
W1207 05:48:46.194] I1207 05:48:46.189873   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.194] I1207 05:48:46.189908   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.194] I1207 05:48:46.189921   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.194] I1207 05:48:46.190020   52110 secure_serving.go:156] Stopped listening on 127.0.0.1:6443
W1207 05:48:46.194] I1207 05:48:46.190058   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.194] I1207 05:48:46.190071   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.195] W1207 05:48:46.190081   52110 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 26 lines ...
... skipping 66 lines ...
W1207 05:48:46.208] I1207 05:48:46.193060   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W1207 05:48:46.208] I1207 05:48:46.193061   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.208] I1207 05:48:46.193082   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.208] I1207 05:48:46.193254   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.208] I1207 05:48:46.193310   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.208] I1207 05:48:46.193319   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.209] E1207 05:48:46.193370   52110 controller.go:172] rpc error: code = Unavailable desc = transport is closing
... skipping 54 lines ...
W1207 05:48:46.219] W1207 05:48:46.194019   52110 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 05:48:46.219] W1207 05:48:46.194036   52110 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 05:48:46.219] I1207 05:48:46.194051   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.219] W1207 05:48:46.194055   52110 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 05:48:46.219] I1207 05:48:46.194062   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:48:46.220] I1207 05:48:46.194239   52110 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 4 duplicate balancer update lines ...
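The burst of reconnect warnings above is normal gRPC client behavior once the test's etcd endpoint at 127.0.0.1:2379 goes away: the channel keeps retrying the address in the background instead of failing outright, logging one warning per attempt. A minimal sketch of that behavior, assuming the grpcio Python package (which is not part of this CI run):

```python
import grpc

# Nothing is listening on 127.0.0.1:2379 once etcd is gone, so the channel
# never becomes ready; the client keeps retrying in the background, which is
# what produces the repeated "Reconnecting..." warnings above.
channel = grpc.insecure_channel("127.0.0.1:2379")
try:
    grpc.channel_ready_future(channel).result(timeout=3)
except grpc.FutureTimeoutError:
    print("endpoint unreachable (connection refused); client keeps retrying")
```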
... skipping 72 lines ...
I1207 05:59:04.274] ok  	k8s.io/kubernetes/test/integration/dryrun	18.429s
I1207 05:59:04.274] [restful] 2018/12/07 05:50:56 log.go:33: [restful/swagger] listing is available at https://172.17.0.2:45463/swaggerapi
I1207 05:59:04.274] [restful] 2018/12/07 05:50:56 log.go:33: [restful/swagger] https://172.17.0.2:45463/swaggerui/ is mapped to folder /swagger-ui/
I1207 05:59:04.274] [restful] 2018/12/07 05:50:58 log.go:33: [restful/swagger] listing is available at https://172.17.0.2:45463/swaggerapi
I1207 05:59:04.274] [restful] 2018/12/07 05:50:58 log.go:33: [restful/swagger] https://172.17.0.2:45463/swaggerui/ is mapped to folder /swagger-ui/
I1207 05:59:04.274] ok  	k8s.io/kubernetes/test/integration/etcd	27.453s
I1207 05:59:04.274] FAIL	k8s.io/kubernetes/test/integration/evictions	15.036s
I1207 05:59:04.275] [restful] 2018/12/07 05:51:27 log.go:33: [restful/swagger] listing is available at https://172.17.0.2:38589/swaggerapi
I1207 05:59:04.275] [restful] 2018/12/07 05:51:27 log.go:33: [restful/swagger] https://172.17.0.2:38589/swaggerui/ is mapped to folder /swagger-ui/
I1207 05:59:04.275] [restful] 2018/12/07 05:51:37 log.go:33: [restful/swagger] listing is available at https://172.17.0.2:37831/swaggerapi
I1207 05:59:04.275] [restful] 2018/12/07 05:51:37 log.go:33: [restful/swagger] https://172.17.0.2:37831/swaggerui/ is mapped to folder /swagger-ui/
I1207 05:59:04.276] ok  	k8s.io/kubernetes/test/integration/examples	19.883s
I1207 05:59:04.276] [restful] 2018/12/07 05:51:27 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:45753/swaggerapi
... skipping 171 lines ...
I1207 06:01:52.846] [restful] 2018/12/07 05:54:33 log.go:33: [restful/swagger] https://127.0.0.1:37361/swaggerui/ is mapped to folder /swagger-ui/
I1207 06:01:52.846] ok  	k8s.io/kubernetes/test/integration/tls	12.715s
I1207 06:01:52.846] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.255s
I1207 06:01:52.846] ok  	k8s.io/kubernetes/test/integration/volume	93.324s
I1207 06:01:52.846] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	141.952s
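Only one package in the results above failed: k8s.io/kubernetes/test/integration/evictions (the FAIL line earlier in the list). A hypothetical local repro in the harness's own subprocess style, assuming a checked-out kubernetes tree and a running test etcd, neither of which is shown here:

```python
import subprocess

# Hypothetical repro helper; the package path is taken from the FAIL line
# in the results above. check_call raises if `go test` exits non-zero.
subprocess.check_call(
    ["go", "test", "-v", "k8s.io/kubernetes/test/integration/evictions"]
)
```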
I1207 06:01:54.301] +++ [1207 06:01:54] Saved JUnit XML test report to /workspace/artifacts/junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-054855.xml
I1207 06:01:54.304] Makefile:184: recipe for target 'test' failed
I1207 06:01:54.315] +++ [1207 06:01:54] Cleaning up etcd
W1207 06:01:54.416] make[1]: *** [test] Error 1
W1207 06:01:54.416] !!! [1207 06:01:54] Call tree:
W1207 06:01:54.416] !!! [1207 06:01:54]  1: hack/make-rules/test-integration.sh:105 runTests(...)
W1207 06:01:54.499] make: *** [test-integration] Error 1
I1207 06:01:54.599] +++ [1207 06:01:54] Integration test cleanup complete
I1207 06:01:54.599] Makefile:203: recipe for target 'test-integration' failed
W1207 06:01:55.578] Traceback (most recent call last):
W1207 06:01:55.578]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 167, in <module>
W1207 06:01:55.578]     main(ARGS.branch, ARGS.script, ARGS.force, ARGS.prow)
W1207 06:01:55.578]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 136, in main
W1207 06:01:55.578]     check(*cmd)
W1207 06:01:55.578]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W1207 06:01:55.578]     subprocess.check_call(cmd)
W1207 06:01:55.579]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
W1207 06:01:55.588]     raise CalledProcessError(retcode, cmd)
W1207 06:01:55.589] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181105-ceed87206', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E1207 06:01:55.595] Command failed
I1207 06:01:55.595] process 694 exited with code 1 after 25.9m
E1207 06:01:55.595] FAIL: pull-kubernetes-integration
I1207 06:01:55.596] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1207 06:01:56.072] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1207 06:01:56.136] process 123111 exited with code 0 after 0.0m
I1207 06:01:56.136] Call:  gcloud config get-value account
I1207 06:01:56.401] process 123124 exited with code 0 after 0.0m
I1207 06:01:56.401] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 06:01:56.401] Upload result and artifacts...
I1207 06:01:56.401] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/71684/pull-kubernetes-integration/37829
I1207 06:01:56.402] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/71684/pull-kubernetes-integration/37829/artifacts
W1207 06:01:58.244] CommandException: One or more URLs matched no objects.
E1207 06:01:58.468] Command failed
I1207 06:01:58.468] process 123137 exited with code 1 after 0.0m
W1207 06:01:58.468] Remote dir gs://kubernetes-jenkins/pr-logs/pull/71684/pull-kubernetes-integration/37829/artifacts does not exist yet
I1207 06:01:58.468] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/71684/pull-kubernetes-integration/37829/artifacts
I1207 06:02:02.233] process 123282 exited with code 0 after 0.1m
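The failed `gsutil ls` above is expected on a fresh upload: the artifacts directory does not exist until the subsequent `gsutil cp -r` creates it. A sketch of that probe-then-upload pattern, with hypothetical helper names rather than the harness's actual code:

```python
import subprocess

def remote_dir_exists(gcs_path):
    # `gsutil ls` exits non-zero when the URL matches no objects, which the
    # uploader treats as "directory not created yet" rather than an error.
    return subprocess.call(["gsutil", "ls", gcs_path]) == 0

def upload_artifacts(local_dir, gcs_path):
    if not remote_dir_exists(gcs_path):
        print("Remote dir", gcs_path, "does not exist yet")
    subprocess.check_call(["gsutil", "-m", "cp", "-r", local_dir, gcs_path])
```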
W1207 06:02:02.234] metadata path /workspace/_artifacts/metadata.json does not exist
W1207 06:02:02.234] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...