Result: FAILURE
Tests: 1 failed / 578 succeeded
Started: 2018-12-05 20:45
Elapsed: 26m39s
Version: v1.14.0-alpha.0.852+a0c2788249ae15
Builder: gke-prow-default-pool-3c8994a8-xq6g
pod: 80d0ba9e-f8ce-11e8-b720-0a580a6c02d1
infra-commit: 267415765
repo: k8s.io/kubernetes
repo-commit: a0c2788249ae1582d10089e7a34bb54fc6b3879d
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestImageLocality 4.68s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestImageLocality$
I1205 21:05:29.241754  120170 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I1205 21:05:29.241789  120170 master.go:272] Node port range unspecified. Defaulting to 30000-32767.
I1205 21:05:29.241800  120170 master.go:228] Using reconciler: 
I1205 21:05:29.243264  120170 clientconn.go:551] parsed scheme: ""
I1205 21:05:29.243287  120170 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1205 21:05:29.243322  120170 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1205 21:05:29.243379  120170 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1205 21:05:29.243772  120170 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 310 lines of repeated etcd client connection setup (the same clientconn.go "parsed scheme" / resolver_conn_wrapper / balancer_v1_wrapper cycle as above, once per storage client) ...
W1205 21:05:29.316083  120170 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W1205 21:05:29.329450  120170 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1205 21:05:29.330069  120170 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1205 21:05:29.332252  120170 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1205 21:05:29.345024  120170 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
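(The "Skipping API ... v1alpha1" warnings above are expected: alpha API groups serve no resources unless explicitly enabled on the apiserver, e.g. with a flag of the form --runtime-config=batch/v2alpha1=true. The flag value is an illustration; this run did not enable them.)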
I1205 21:05:30.241521  120170 clientconn.go:551] parsed scheme: ""
I1205 21:05:30.241554  120170 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1205 21:05:30.241611  120170 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1205 21:05:30.241685  120170 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1205 21:05:30.242049  120170 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1205 21:05:30.350991  120170 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I1205 21:05:30.353404  120170 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I1205 21:05:30.353473  120170 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I1205 21:05:30.360714  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1205 21:05:30.363454  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1205 21:05:30.366080  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1205 21:05:30.368815  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I1205 21:05:30.371383  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I1205 21:05:30.374419  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I1205 21:05:30.377253  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1205 21:05:30.380330  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1205 21:05:30.383361  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1205 21:05:30.386193  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1205 21:05:30.389004  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I1205 21:05:30.391609  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1205 21:05:30.394398  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1205 21:05:30.397068  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1205 21:05:30.399852  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1205 21:05:30.402344  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1205 21:05:30.404926  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1205 21:05:30.407610  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1205 21:05:30.410469  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1205 21:05:30.413164  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1205 21:05:30.415854  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1205 21:05:30.418503  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1205 21:05:30.421058  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I1205 21:05:30.423759  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1205 21:05:30.426600  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1205 21:05:30.429165  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1205 21:05:30.431847  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1205 21:05:30.434867  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1205 21:05:30.437448  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1205 21:05:30.440384  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1205 21:05:30.443297  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1205 21:05:30.445814  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1205 21:05:30.448564  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1205 21:05:30.451218  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1205 21:05:30.453891  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1205 21:05:30.456569  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1205 21:05:30.459473  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1205 21:05:30.462237  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1205 21:05:30.465033  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1205 21:05:30.467782  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1205 21:05:30.470627  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1205 21:05:30.473321  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1205 21:05:30.476358  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1205 21:05:30.479034  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1205 21:05:30.481663  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1205 21:05:30.484546  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1205 21:05:30.487224  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1205 21:05:30.489786  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1205 21:05:30.492678  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1205 21:05:30.509865  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1205 21:05:30.549890  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1205 21:05:30.590221  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1205 21:05:30.630230  120170 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1205 21:05:30.670144  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1205 21:05:30.709747  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1205 21:05:30.763333  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1205 21:05:30.790276  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1205 21:05:30.830352  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1205 21:05:30.870134  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1205 21:05:30.910208  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1205 21:05:30.949933  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I1205 21:05:30.989959  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1205 21:05:31.029788  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1205 21:05:31.070091  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1205 21:05:31.110207  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1205 21:05:31.150112  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1205 21:05:31.190036  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1205 21:05:31.230137  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1205 21:05:31.269897  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1205 21:05:31.310183  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1205 21:05:31.349941  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1205 21:05:31.389869  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1205 21:05:31.430043  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1205 21:05:31.470730  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1205 21:05:31.510226  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1205 21:05:31.550197  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1205 21:05:31.590031  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1205 21:05:31.630135  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1205 21:05:31.670131  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1205 21:05:31.710159  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1205 21:05:31.750347  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1205 21:05:31.790390  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1205 21:05:31.830480  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1205 21:05:31.870267  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1205 21:05:31.910229  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1205 21:05:31.950150  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1205 21:05:31.989953  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1205 21:05:32.030187  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1205 21:05:32.069981  120170 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1205 21:05:32.110145  120170 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1205 21:05:32.150259  120170 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1205 21:05:32.190943  120170 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1205 21:05:32.229944  120170 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1205 21:05:32.270097  120170 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1205 21:05:32.310173  120170 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1205 21:05:32.350056  120170 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1205 21:05:32.390288  120170 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1205 21:05:32.430173  120170 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1205 21:05:32.470360  120170 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1205 21:05:32.511107  120170 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1205 21:05:32.568818  120170 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1205 21:05:32.590225  120170 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W1205 21:05:32.651003  120170 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
... skipping 9 identical mutation detector warnings ...
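These warnings come from client-go's cache mutation detector, which the integration test environment switches on via an environment variable; when enabled, informer caches keep deep copies of cached objects and panic if a consumer mutates shared state, at the cost of the extra memory the warning mentions. As an illustration of the mechanism (not a line from this run):

KUBE_CACHE_MUTATION_DETECTOR=true go test -v k8s.io/kubernetes/test/integration/scheduler -run TestImageLocality$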
I1205 21:05:32.651399  120170 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I1205 21:05:32.751628  120170 controller_utils.go:1034] Caches are synced for scheduler controller
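The controller_utils.go pair above is the scheduler's informer caches finishing their initial sync before scheduling begins. A minimal client-go sketch of that wait-for-sync pattern, assuming a reachable cluster via $KUBECONFIG (an illustration, not the test harness code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// assumes $KUBECONFIG points at a usable kubeconfig
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stopCh := make(chan struct{})
	defer close(stopCh)

	factory := informers.NewSharedInformerFactory(client, 0)
	podInformer := factory.Core().V1().Pods().Informer()
	factory.Start(stopCh) // kicks off the initial LIST+WATCH

	// block until the local cache reflects the server state, mirroring the
	// "Waiting for caches to sync" / "Caches are synced" pair in the log
	if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches are synced")
}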
E1205 21:05:33.887828  120170 factory.go:1352] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I1205 21:05:33.913429  120170 controller.go:170] Shutting down kubernetes service endpoint reconciler
				from junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181205-205922.xml
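The only error-level line in the log is factory.go:1352's "scheduling queue is closed", emitted when the test tears the scheduler down and the scheduling loop's blocking Pop returns. A toy Go sketch of that shutdown pattern, loosely modeled on the scheduler's internal queue (not the actual kubernetes implementation):

package main

import (
	"errors"
	"fmt"
	"sync"
)

var errClosed = errors.New("scheduling queue is closed")

// toyQueue is a tiny stand-in for the scheduler's pod queue:
// Pop blocks until an item is available or the queue is closed.
type toyQueue struct {
	mu     sync.Mutex
	cond   *sync.Cond
	items  []string
	closed bool
}

func newToyQueue() *toyQueue {
	q := &toyQueue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *toyQueue) Push(item string) {
	q.mu.Lock()
	q.items = append(q.items, item)
	q.mu.Unlock()
	q.cond.Broadcast()
}

func (q *toyQueue) Pop() (string, error) {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 {
		if q.closed {
			return "", errClosed
		}
		q.cond.Wait()
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, nil
}

func (q *toyQueue) Close() {
	q.mu.Lock()
	q.closed = true
	q.mu.Unlock()
	q.cond.Broadcast()
}

func main() {
	q := newToyQueue()
	done := make(chan struct{})

	// the scheduling loop: keeps popping pods until the queue closes
	go func() {
		defer close(done)
		for {
			pod, err := q.Pop()
			if err != nil {
				// this is what surfaces in the log as
				// "Error while retrieving next pod from scheduling queue: ..."
				fmt.Println("Error while retrieving next pod from scheduling queue:", err)
				return
			}
			fmt.Println("scheduled", pod)
		}
	}()

	q.Push("test-pod")
	q.Close() // test teardown: the loop exits via errClosed
	<-done
}

The point of the pattern is that closing the queue is how teardown unblocks the loop, so this error line is typically ordinary shutdown noise rather than, by itself, the cause of the failure.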



578 tests passed; 4 tests skipped.
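To rerun just the failing test against a local checkout, the usual route is the integration-test make target, which needs a local etcd on 127.0.0.1:2379; the commands below follow the contributor docs of this era and are a sketch rather than a guaranteed recipe:

hack/install-etcd.sh && export PATH=$PWD/third_party/etcd:$PATH
make test-integration WHAT=./test/integration/scheduler GOFLAGS="-v" KUBE_TEST_ARGS="-run TestImageLocality$"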

Error lines from build-log.txt

... skipping 10 lines ...
I1205 20:45:09.017] process 232 exited with code 0 after 0.1m
I1205 20:45:09.018] Call:  gcloud config get-value account
I1205 20:45:09.348] process 245 exited with code 0 after 0.0m
I1205 20:45:09.348] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1205 20:45:09.348] Call:  kubectl get -oyaml pods/80d0ba9e-f8ce-11e8-b720-0a580a6c02d1
W1205 20:45:11.245] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E1205 20:45:11.249] Command failed
I1205 20:45:11.249] process 258 exited with code 1 after 0.0m
E1205 20:45:11.249] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/80d0ba9e-f8ce-11e8-b720-0a580a6c02d1']' returned non-zero exit status 1
I1205 20:45:11.249] Root: /workspace
I1205 20:45:11.250] cd to /workspace
I1205 20:45:11.250] Checkout: /workspace/k8s.io/kubernetes master to /workspace/k8s.io/kubernetes
I1205 20:45:11.250] Call:  git init k8s.io/kubernetes
... skipping 808 lines ...
W1205 20:54:29.766] I1205 20:54:29.763081   55589 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W1205 20:54:29.766] I1205 20:54:29.763124   55589 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W1205 20:54:29.766] I1205 20:54:29.763196   55589 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
W1205 20:54:29.766] I1205 20:54:29.763294   55589 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
W1205 20:54:29.767] I1205 20:54:29.763388   55589 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W1205 20:54:29.767] I1205 20:54:29.763441   55589 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W1205 20:54:29.767] E1205 20:54:29.763470   55589 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1205 20:54:29.768] I1205 20:54:29.763497   55589 controllermanager.go:516] Started "resourcequota"
W1205 20:54:29.768] I1205 20:54:29.763727   55589 resource_quota_controller.go:276] Starting resource quota controller
W1205 20:54:29.768] I1205 20:54:29.763802   55589 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W1205 20:54:29.769] I1205 20:54:29.763877   55589 resource_quota_monitor.go:301] QuotaMonitor running
W1205 20:54:29.769] I1205 20:54:29.765487   55589 controllermanager.go:516] Started "disruption"
W1205 20:54:29.769] I1205 20:54:29.766216   55589 controllermanager.go:516] Started "endpoint"
... skipping 36 lines ...
W1205 20:54:29.780] I1205 20:54:29.780126   55589 controller_utils.go:1027] Waiting for caches to sync for persistent volume controller
W1205 20:54:29.780] I1205 20:54:29.780682   55589 controllermanager.go:516] Started "job"
W1205 20:54:29.781] W1205 20:54:29.780697   55589 controllermanager.go:495] "bootstrapsigner" is disabled
W1205 20:54:29.781] W1205 20:54:29.780705   55589 controllermanager.go:508] Skipping "nodeipam"
W1205 20:54:29.781] I1205 20:54:29.780759   55589 job_controller.go:143] Starting job controller
W1205 20:54:29.781] I1205 20:54:29.780777   55589 controller_utils.go:1027] Waiting for caches to sync for job controller
W1205 20:54:29.782] E1205 20:54:29.781927   55589 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1205 20:54:29.782] W1205 20:54:29.781948   55589 controllermanager.go:508] Skipping "service"
W1205 20:54:29.783] W1205 20:54:29.782693   55589 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W1205 20:54:29.783] I1205 20:54:29.783399   55589 controllermanager.go:516] Started "attachdetach"
W1205 20:54:29.784] I1205 20:54:29.784206   55589 controllermanager.go:516] Started "pvc-protection"
W1205 20:54:29.784] I1205 20:54:29.784417   55589 attach_detach_controller.go:315] Starting attach detach controller
W1205 20:54:29.784] I1205 20:54:29.784443   55589 controller_utils.go:1027] Waiting for caches to sync for attach detach controller
... skipping 8 lines ...
W1205 20:54:29.787] I1205 20:54:29.787629   55589 controllermanager.go:516] Started "serviceaccount"
W1205 20:54:29.788] W1205 20:54:29.787683   55589 controllermanager.go:508] Skipping "csrsigning"
W1205 20:54:29.788] I1205 20:54:29.787779   55589 serviceaccounts_controller.go:115] Starting service account controller
W1205 20:54:29.788] I1205 20:54:29.787790   55589 controller_utils.go:1027] Waiting for caches to sync for service account controller
W1205 20:54:29.788] I1205 20:54:29.788053   55589 controllermanager.go:516] Started "ttl"
W1205 20:54:29.788] W1205 20:54:29.788071   55589 controllermanager.go:508] Skipping "ttl-after-finished"
W1205 20:54:29.789] W1205 20:54:29.788324   55589 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
W1205 20:54:29.789] I1205 20:54:29.788816   55589 controllermanager.go:516] Started "garbagecollector"
W1205 20:54:29.789] I1205 20:54:29.789451   55589 controllermanager.go:516] Started "daemonset"
W1205 20:54:29.789] I1205 20:54:29.789728   55589 controllermanager.go:516] Started "csrcleaner"
W1205 20:54:29.790] I1205 20:54:29.790047   55589 node_lifecycle_controller.go:272] Sending events to api server.
W1205 20:54:29.790] I1205 20:54:29.790126   55589 node_lifecycle_controller.go:312] Controller is using taint based evictions.
W1205 20:54:29.790] I1205 20:54:29.790194   55589 taint_manager.go:175] Sending events to api server.
... skipping 17 lines ...
W1205 20:54:29.870] I1205 20:54:29.870339   55589 controller_utils.go:1034] Caches are synced for expand controller
W1205 20:54:29.871] I1205 20:54:29.870978   55589 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
W1205 20:54:29.871] I1205 20:54:29.871495   55589 controller_utils.go:1034] Caches are synced for stateful set controller
W1205 20:54:29.878] I1205 20:54:29.877802   55589 controller_utils.go:1034] Caches are synced for namespace controller
W1205 20:54:29.878] I1205 20:54:29.878415   55589 controller_utils.go:1034] Caches are synced for deployment controller
W1205 20:54:29.879] I1205 20:54:29.879153   55589 controller_utils.go:1034] Caches are synced for certificate controller
W1205 20:54:29.883] E1205 20:54:29.883318   55589 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1205 20:54:29.884] I1205 20:54:29.884575   55589 controller_utils.go:1034] Caches are synced for attach detach controller
W1205 20:54:29.885] I1205 20:54:29.884856   55589 controller_utils.go:1034] Caches are synced for PVC protection controller
W1205 20:54:29.885] I1205 20:54:29.885429   55589 controller_utils.go:1034] Caches are synced for PV protection controller
W1205 20:54:29.887] I1205 20:54:29.887188   55589 controller_utils.go:1034] Caches are synced for ReplicationController controller
W1205 20:54:29.888] I1205 20:54:29.887941   55589 controller_utils.go:1034] Caches are synced for service account controller
W1205 20:54:29.890] I1205 20:54:29.889858   52228 controller.go:608] quota admission added evaluator for: serviceaccounts
W1205 20:54:29.891] I1205 20:54:29.890870   55589 controller_utils.go:1034] Caches are synced for TTL controller
W1205 20:54:29.892] I1205 20:54:29.892278   55589 controller_utils.go:1034] Caches are synced for taint controller
W1205 20:54:29.892] I1205 20:54:29.892293   55589 controller_utils.go:1034] Caches are synced for daemon sets controller
W1205 20:54:29.893] I1205 20:54:29.892428   55589 taint_manager.go:198] Starting NoExecuteTaintManager
W1205 20:54:29.967] W1205 20:54:29.966830   55589 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1205 20:54:30.065] I1205 20:54:30.064732   55589 controller_utils.go:1034] Caches are synced for resource quota controller
W1205 20:54:30.068] I1205 20:54:30.067825   55589 controller_utils.go:1034] Caches are synced for endpoint controller
W1205 20:54:30.081] I1205 20:54:30.080958   55589 controller_utils.go:1034] Caches are synced for job controller
W1205 20:54:30.180] I1205 20:54:30.180345   55589 controller_utils.go:1034] Caches are synced for persistent volume controller
I1205 20:54:30.281] +++ [1205 20:54:29] On try 3, controller-manager: ok
I1205 20:54:30.282] node/127.0.0.1 created
... skipping 28 lines ...
I1205 20:54:31.057] Successful: --output json has correct server info
I1205 20:54:31.060] +++ [1205 20:54:31] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I1205 20:54:31.237] Successful: --client --output json has correct client info
I1205 20:54:31.273] Successful: --client --output json has no server info
I1205 20:54:31.276] +++ [1205 20:54:31] Testing kubectl version: compare json output using additional --short flag
W1205 20:54:31.377] I1205 20:54:31.245693   55589 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1205 20:54:31.378] E1205 20:54:31.258182   55589 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1205 20:54:31.378] I1205 20:54:31.290882   55589 controller_utils.go:1034] Caches are synced for garbage collector controller
W1205 20:54:31.378] I1205 20:54:31.291428   55589 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1205 20:54:31.379] I1205 20:54:31.346005   55589 controller_utils.go:1034] Caches are synced for garbage collector controller
I1205 20:54:31.479] Successful: --short --output client json info is equal to non short result
I1205 20:54:31.480] Successful: --short --output server json info is equal to non short result
I1205 20:54:31.480] +++ [1205 20:54:31] Testing kubectl version: compare json output with yaml output
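The version checks above exercise combinations of the flags below; a rough sketch of what the harness compares:

  kubectl version --output=json            # client and server version info
  kubectl version --client --output=json   # client info only, no serverVersion
  kubectl version --short --output=json    # abbreviated values, same fields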
... skipping 45 lines ...
I1205 20:54:34.178] +++ working dir: /go/src/k8s.io/kubernetes
I1205 20:54:34.180] +++ command: run_RESTMapper_evaluation_tests
I1205 20:54:34.190] +++ [1205 20:54:34] Creating namespace namespace-1544043274-25925
I1205 20:54:34.256] namespace/namespace-1544043274-25925 created
I1205 20:54:34.315] Context "test" modified.
I1205 20:54:34.319] +++ [1205 20:54:34] Testing RESTMapper
I1205 20:54:34.415] +++ [1205 20:54:34] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1205 20:54:34.427] +++ exit code: 0
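The listing that follows is the kind of table kubectl api-resources prints; the RESTMapper test above confirms that a type absent from it fails fast:

  kubectl get unknownresourcetype   # error: the server doesn't have a resource type "unknownresourcetype"
  kubectl api-resources             # enumerate NAME, SHORTNAMES, APIGROUP, NAMESPACED, KIND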
I1205 20:54:34.527] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1205 20:54:34.527] bindings                                                                      true         Binding
I1205 20:54:34.527] componentstatuses                 cs                                          false        ComponentStatus
I1205 20:54:34.527] configmaps                        cm                                          true         ConfigMap
I1205 20:54:34.527] endpoints                         ep                                          true         Endpoints
... skipping 609 lines ...
I1205 20:54:52.696] poddisruptionbudget.policy/test-pdb-3 created
I1205 20:54:52.779] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1205 20:54:52.846] poddisruptionbudget.policy/test-pdb-4 created
I1205 20:54:52.938] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1205 20:54:53.086] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:54:53.253] pod/env-test-pod created
W1205 20:54:53.354] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1205 20:54:53.354] error: setting 'all' parameter but found a non empty selector. 
W1205 20:54:53.355] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1205 20:54:53.355] I1205 20:54:52.392790   52228 controller.go:608] quota admission added evaluator for: poddisruptionbudgets.policy
W1205 20:54:53.355] error: min-available and max-unavailable cannot be both specified
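The errors above come from deliberately invalid invocations; the last one is the PodDisruptionBudget bound check, roughly (selector hypothetical):

  kubectl create pdb test-pdb --selector=app=demo --min-available=2      # ok
  kubectl create pdb test-pdb --selector=app=demo --max-unavailable=50%  # ok
  kubectl create pdb test-pdb --selector=app=demo --min-available=2 --max-unavailable=1
  # error: min-available and max-unavailable cannot be both specified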
I1205 20:54:53.455] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I1205 20:54:53.456] Name:               env-test-pod
I1205 20:54:53.456] Namespace:          test-kubectl-describe-pod
I1205 20:54:53.456] Priority:           0
I1205 20:54:53.456] PriorityClassName:  <none>
I1205 20:54:53.456] Node:               <none>
... skipping 145 lines ...
W1205 20:55:04.990] I1205 20:55:04.139868   55589 namespace_controller.go:171] Namespace has been deleted test-kubectl-describe-pod
W1205 20:55:04.990] I1205 20:55:04.567393   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043300-11832", Name:"modified", UID:"0a5df44d-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-258j6
I1205 20:55:05.133] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:05.278] pod/valid-pod created
I1205 20:55:05.370] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1205 20:55:05.516] Successful
I1205 20:55:05.516] message:Error from server: cannot restore map from string
I1205 20:55:05.516] has:cannot restore map from string
I1205 20:55:05.598] Successful
I1205 20:55:05.598] message:pod/valid-pod patched (no change)
I1205 20:55:05.598] has:patched (no change)
I1205 20:55:05.680] pod/valid-pod patched
I1205 20:55:05.768] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I1205 20:55:06.278] pod/valid-pod patched
I1205 20:55:06.367] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1205 20:55:06.438] pod/valid-pod patched
I1205 20:55:06.526] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1205 20:55:06.682] pod/valid-pod patched
I1205 20:55:06.774] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1205 20:55:06.940] +++ [1205 20:55:06] "kubectl patch with resourceVersion 488" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W1205 20:55:07.041] E1205 20:55:05.510139   52228 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I1205 20:55:07.164] pod "valid-pod" deleted
I1205 20:55:07.176] pod/valid-pod replaced
I1205 20:55:07.264] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1205 20:55:07.416] Successful
I1205 20:55:07.416] message:error: --grace-period must have --force specified
I1205 20:55:07.416] has:\-\-grace-period must have \-\-force specified
I1205 20:55:07.565] Successful
I1205 20:55:07.565] message:error: --timeout must have --force specified
I1205 20:55:07.565] has:\-\-timeout must have \-\-force specified
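Both errors above guard the same invariant: --grace-period and --timeout are only honored when --force is also given. A sketch of the accepted form:

  kubectl delete pod valid-pod --grace-period=0 --force   # immediate, unconfirmed deletion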
W1205 20:55:07.710] W1205 20:55:07.710112   55589 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1205 20:55:07.811] node/node-v1-test created
I1205 20:55:07.872] node/node-v1-test replaced
I1205 20:55:07.969] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1205 20:55:08.045] node "node-v1-test" deleted
I1205 20:55:08.137] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1205 20:55:08.397] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 58 lines ...
I1205 20:55:13.444] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:13.610] pod/test-pod created
W1205 20:55:13.711] Edit cancelled, no changes made.
W1205 20:55:13.711] Edit cancelled, no changes made.
W1205 20:55:13.711] Edit cancelled, no changes made.
W1205 20:55:13.712] Edit cancelled, no changes made.
W1205 20:55:13.712] error: 'name' already has a value (valid-pod), and --overwrite is false
W1205 20:55:13.712] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1205 20:55:13.712] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1205 20:55:13.812] pod "test-pod" deleted
I1205 20:55:13.813] +++ [1205 20:55:13] Creating namespace namespace-1544043313-18639
I1205 20:55:13.876] namespace/namespace-1544043313-18639 created
I1205 20:55:13.947] Context "test" modified.
... skipping 41 lines ...
I1205 20:55:17.196] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1205 20:55:17.199] +++ working dir: /go/src/k8s.io/kubernetes
I1205 20:55:17.201] +++ command: run_kubectl_create_error_tests
I1205 20:55:17.213] +++ [1205 20:55:17] Creating namespace namespace-1544043317-23973
I1205 20:55:17.281] namespace/namespace-1544043317-23973 created
I1205 20:55:17.345] Context "test" modified.
I1205 20:55:17.350] +++ [1205 20:55:17] Testing kubectl create with error
W1205 20:55:17.451] Error: required flag(s) "filename" not set
W1205 20:55:17.451] 
W1205 20:55:17.451] 
W1205 20:55:17.451] Examples:
W1205 20:55:17.451]   # Create a pod using the data in pod.json.
W1205 20:55:17.451]   kubectl create -f ./pod.json
W1205 20:55:17.451]   
... skipping 38 lines ...
W1205 20:55:17.457]   kubectl create -f FILENAME [options]
W1205 20:55:17.457] 
W1205 20:55:17.457] Use "kubectl <command> --help" for more information about a given command.
W1205 20:55:17.457] Use "kubectl options" for a list of global command-line options (applies to all commands).
W1205 20:55:17.457] 
W1205 20:55:17.457] required flag(s) "filename" not set
I1205 20:55:17.558] +++ [1205 20:55:17] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1205 20:55:17.658] kubectl convert is DEPRECATED and will be removed in a future version.
W1205 20:55:17.659] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1205 20:55:17.759] +++ exit code: 0
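The create error tests above cover the missing --filename flag and client-side validation; in outline:

  kubectl create                                # fails: required flag(s) "filename" not set
  kubectl create -f ./pod.json                  # the form the usage text prescribes
  kubectl create -f ./rc.yaml --validate=false  # bypass the validation error shown above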
I1205 20:55:17.759] Recording: run_kubectl_apply_tests
I1205 20:55:17.759] Running command: run_kubectl_apply_tests
I1205 20:55:17.769] 
... skipping 13 lines ...
I1205 20:55:18.763] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I1205 20:55:19.641] deployment.extensions "test-deployment-retainkeys" deleted
I1205 20:55:19.766] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:19.887] pod/selector-test-pod created
I1205 20:55:19.980] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1205 20:55:20.061] Successful
I1205 20:55:20.061] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1205 20:55:20.061] has:pods "selector-test-pod-dont-apply" not found
I1205 20:55:20.137] pod "selector-test-pod" deleted
I1205 20:55:20.228] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:20.444] pod/test-pod created (server dry run)
I1205 20:55:20.541] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:20.694] pod/test-pod created
... skipping 12 lines ...
W1205 20:55:21.417] I1205 20:55:21.416943   52228 clientconn.go:551] parsed scheme: ""
W1205 20:55:21.417] I1205 20:55:21.416975   52228 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1205 20:55:21.418] I1205 20:55:21.417020   52228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1205 20:55:21.418] I1205 20:55:21.417061   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:55:21.418] I1205 20:55:21.417387   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:55:21.484] I1205 20:55:21.484255   52228 controller.go:608] quota admission added evaluator for: resources.mygroup.example.com
W1205 20:55:21.564] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1205 20:55:21.665] kind.mygroup.example.com/myobj created (server dry run)
I1205 20:55:21.665] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1205 20:55:21.735] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:21.887] pod/a created
I1205 20:55:23.382] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I1205 20:55:23.461] Successful
I1205 20:55:23.461] message:Error from server (NotFound): pods "b" not found
I1205 20:55:23.462] has:pods "b" not found
I1205 20:55:23.615] pod/b created
I1205 20:55:23.632] pod/a pruned
I1205 20:55:25.320] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I1205 20:55:25.417] Successful
I1205 20:55:25.418] message:Error from server (NotFound): pods "a" not found
I1205 20:55:25.418] has:pods "a" not found
I1205 20:55:25.509] pod "b" deleted
I1205 20:55:25.625] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:25.803] pod/a created
I1205 20:55:25.916] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I1205 20:55:26.009] Successful
I1205 20:55:26.009] message:Error from server (NotFound): pods "b" not found
I1205 20:55:26.009] has:pods "b" not found
I1205 20:55:26.174] pod/b created
I1205 20:55:26.270] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I1205 20:55:26.368] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I1205 20:55:26.453] pod "a" deleted
I1205 20:55:26.458] pod "b" deleted
I1205 20:55:26.650] Successful
I1205 20:55:26.651] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I1205 20:55:26.651] has:all resources selected for prune without explicitly passing --all
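Pruning refuses to run without an explicit scope, as the message above shows; it needs either a label selector or --all (directory and selector hypothetical):

  kubectl apply --prune -f manifests/ -l app=demo   # prune only within one label set
  kubectl apply --prune -f manifests/ --all         # explicitly prune everything previously applied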
I1205 20:55:26.808] pod/a created
I1205 20:55:26.815] pod/b created
I1205 20:55:26.825] service/prune-svc created
I1205 20:55:28.314] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I1205 20:55:28.395] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 127 lines ...
I1205 20:55:40.329] Context "test" modified.
I1205 20:55:40.334] +++ [1205 20:55:40] Testing kubectl create filter
I1205 20:55:40.418] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:40.563] pod/selector-test-pod created
I1205 20:55:40.657] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1205 20:55:40.738] Successful
I1205 20:55:40.738] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1205 20:55:40.738] has:pods "selector-test-pod-dont-apply" not found
I1205 20:55:40.813] pod "selector-test-pod" deleted
I1205 20:55:40.829] +++ exit code: 0
I1205 20:55:40.861] Recording: run_kubectl_apply_deployments_tests
I1205 20:55:40.861] Running command: run_kubectl_apply_deployments_tests
I1205 20:55:40.882] 
... skipping 45 lines ...
I1205 20:55:42.752] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:42.821] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:42.902] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:43.055] deployment.extensions/nginx created
I1205 20:55:43.151] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I1205 20:55:47.348] Successful
I1205 20:55:47.348] message:Error from server (Conflict): error when applying patch:
I1205 20:55:47.349] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544043340-6004\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1205 20:55:47.349] to:
I1205 20:55:47.349] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I1205 20:55:47.349] Name: "nginx", Namespace: "namespace-1544043340-6004"
I1205 20:55:47.351] Object: &{map["status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["lastUpdateTime":"2018-12-05T20:55:43Z" "lastTransitionTime":"2018-12-05T20:55:43Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available" "status":"False"]]] "kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1544043340-6004/deployments/nginx" "uid":"214f32a8-f8d0-11e8-83ce-0242ac110002" "resourceVersion":"706" "generation":'\x01' "creationTimestamp":"2018-12-05T20:55:43Z" "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544043340-6004\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "name":"nginx" "namespace":"namespace-1544043340-6004" "labels":map["name":"nginx"]] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["spec":map["containers":[map["resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]]]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"] "metadata":map["labels":map["name":"nginx1"] "creationTimestamp":<nil>]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647)]]}
I1205 20:55:47.351] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I1205 20:55:47.351] has:Error from server (Conflict)
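The conflict above is expected: the applied manifest pins resourceVersion "99" in its last-applied configuration, so the server treats the patch as a compare-and-swap against a long-gone version and rejects it. Schematically:

  # manifest embeds a stale optimistic-concurrency token:
  #   metadata:
  #     resourceVersion: "99"
  kubectl apply -f hack/testdata/deployment-label-change2.yaml
  # Error from server (Conflict): ... please apply your changes to the latest version and try again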
W1205 20:55:47.452] I1205 20:55:43.058912   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043340-6004", Name:"nginx", UID:"214f32a8-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W1205 20:55:47.452] I1205 20:55:43.061626   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043340-6004", Name:"nginx-5d56d6b95f", UID:"214fcaf0-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-z59z8
W1205 20:55:47.453] I1205 20:55:43.063501   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043340-6004", Name:"nginx-5d56d6b95f", UID:"214fcaf0-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-6rvcn
W1205 20:55:47.453] I1205 20:55:43.064334   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043340-6004", Name:"nginx-5d56d6b95f", UID:"214fcaf0-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-rw56c
W1205 20:55:51.562] E1205 20:55:51.561872   55589 replica_set.go:450] Sync "namespace-1544043340-6004/nginx-5d56d6b95f" failed with replicasets.apps "nginx-5d56d6b95f" not found
I1205 20:55:52.543] deployment.extensions/nginx configured
I1205 20:55:52.631] Successful
I1205 20:55:52.631] message:        "name": "nginx2"
I1205 20:55:52.631]           "name": "nginx2"
I1205 20:55:52.631] has:"name": "nginx2"
W1205 20:55:52.732] I1205 20:55:52.546584   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043340-6004", Name:"nginx", UID:"26f6e676-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
... skipping 82 lines ...
I1205 20:55:59.132] +++ [1205 20:55:59] Creating namespace namespace-1544043359-6881
I1205 20:55:59.198] namespace/namespace-1544043359-6881 created
I1205 20:55:59.262] Context "test" modified.
I1205 20:55:59.267] +++ [1205 20:55:59] Testing kubectl get
I1205 20:55:59.352] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:59.435] Successful
I1205 20:55:59.435] message:Error from server (NotFound): pods "abc" not found
I1205 20:55:59.435] has:pods "abc" not found
I1205 20:55:59.517] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:59.596] Successful
I1205 20:55:59.597] message:Error from server (NotFound): pods "abc" not found
I1205 20:55:59.597] has:pods "abc" not found
I1205 20:55:59.681] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:55:59.757] Successful
I1205 20:55:59.758] message:{
I1205 20:55:59.758]     "apiVersion": "v1",
I1205 20:55:59.758]     "items": [],
... skipping 23 lines ...
I1205 20:56:00.071] has not:No resources found
I1205 20:56:00.147] Successful
I1205 20:56:00.147] message:NAME
I1205 20:56:00.147] has not:No resources found
I1205 20:56:00.231] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:00.338] Successful
I1205 20:56:00.338] message:error: the server doesn't have a resource type "foobar"
I1205 20:56:00.338] has not:No resources found
I1205 20:56:00.415] Successful
I1205 20:56:00.415] message:No resources found.
I1205 20:56:00.416] has:No resources found
I1205 20:56:00.493] Successful
I1205 20:56:00.493] message:
I1205 20:56:00.493] has not:No resources found
I1205 20:56:00.573] Successful
I1205 20:56:00.573] message:No resources found.
I1205 20:56:00.573] has:No resources found
I1205 20:56:00.657] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:00.735] Successful
I1205 20:56:00.735] message:Error from server (NotFound): pods "abc" not found
I1205 20:56:00.735] has:pods "abc" not found
I1205 20:56:00.737] FAIL!
I1205 20:56:00.737] message:Error from server (NotFound): pods "abc" not found
I1205 20:56:00.737] has not:List
I1205 20:56:00.737] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1205 20:56:00.852] Successful
I1205 20:56:00.852] message:I1205 20:56:00.799910   67705 loader.go:359] Config loaded from file /tmp/tmp.cOKIKU7Wcn/.kube/config
I1205 20:56:00.853] I1205 20:56:00.800489   67705 loader.go:359] Config loaded from file /tmp/tmp.cOKIKU7Wcn/.kube/config
I1205 20:56:00.853] I1205 20:56:00.801889   67705 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 995 lines ...
I1205 20:56:04.240] }
I1205 20:56:04.323] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1205 20:56:04.545] <no value>Successful
I1205 20:56:04.545] message:valid-pod:
I1205 20:56:04.545] has:valid-pod:
I1205 20:56:04.623] Successful
I1205 20:56:04.623] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1205 20:56:04.623] 	template was:
I1205 20:56:04.623] 		{.missing}
I1205 20:56:04.624] 	object given to jsonpath engine was:
I1205 20:56:04.624] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"valid-pod", "namespace":"namespace-1544043363-19776", "selfLink":"/api/v1/namespaces/namespace-1544043363-19776/pods/valid-pod", "uid":"2de32482-f8d0-11e8-83ce-0242ac110002", "resourceVersion":"801", "creationTimestamp":"2018-12-05T20:56:04Z", "labels":map[string]interface {}{"name":"valid-pod"}}, "spec":map[string]interface {}{"schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"memory":"512Mi", "cpu":"1"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I1205 20:56:04.624] has:missing is not found
I1205 20:56:04.701] Successful
I1205 20:56:04.701] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1205 20:56:04.702] 	template was:
I1205 20:56:04.702] 		{{.missing}}
I1205 20:56:04.702] 	raw data was:
I1205 20:56:04.703] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2018-12-05T20:56:04Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1544043363-19776","resourceVersion":"801","selfLink":"/api/v1/namespaces/namespace-1544043363-19776/pods/valid-pod","uid":"2de32482-f8d0-11e8-83ce-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1205 20:56:04.703] 	object given to template engine was:
I1205 20:56:04.703] 		map[spec:map[securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler] status:map[phase:Pending qosClass:Guaranteed] apiVersion:v1 kind:Pod metadata:map[labels:map[name:valid-pod] name:valid-pod namespace:namespace-1544043363-19776 resourceVersion:801 selfLink:/api/v1/namespaces/namespace-1544043363-19776/pods/valid-pod uid:2de32482-f8d0-11e8-83ce-0242ac110002 creationTimestamp:2018-12-05T20:56:04Z]]
I1205 20:56:04.703] has:map has no entry for key "missing"
W1205 20:56:04.804] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
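The pair of template failures above distinguishes the two output engines: jsonpath uses {.field}, go-template uses {{.field}}, and each reports a missing key differently. Against the pod created above:

  kubectl get pod valid-pod -o jsonpath='{.metadata.name}'   # valid-pod
  kubectl get pod valid-pod -o jsonpath='{.missing}'         # jsonpath error: missing is not found
  kubectl get pod valid-pod -o go-template='{{.missing}}'    # template error: map has no entry for key "missing"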
W1205 20:56:05.776] E1205 20:56:05.775281   68093 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I1205 20:56:05.876] Successful
I1205 20:56:05.876] message:NAME        READY   STATUS    RESTARTS   AGE
I1205 20:56:05.877] valid-pod   0/1     Pending   0          0s
I1205 20:56:05.877] has:STATUS
I1205 20:56:05.877] Successful
... skipping 80 lines ...
I1205 20:56:08.048]   terminationGracePeriodSeconds: 30
I1205 20:56:08.048] status:
I1205 20:56:08.048]   phase: Pending
I1205 20:56:08.048]   qosClass: Guaranteed
I1205 20:56:08.048] has:name: valid-pod
I1205 20:56:08.048] Successful
I1205 20:56:08.048] message:Error from server (NotFound): pods "invalid-pod" not found
I1205 20:56:08.049] has:"invalid-pod" not found
I1205 20:56:08.104] pod "valid-pod" deleted
I1205 20:56:08.195] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:08.346] pod/redis-master created
I1205 20:56:08.352] pod/valid-pod created
I1205 20:56:08.437] Successful
... skipping 305 lines ...
I1205 20:56:12.409] Running command: run_create_secret_tests
I1205 20:56:12.428] 
I1205 20:56:12.429] +++ Running case: test-cmd.run_create_secret_tests 
I1205 20:56:12.432] +++ working dir: /go/src/k8s.io/kubernetes
I1205 20:56:12.434] +++ command: run_create_secret_tests
I1205 20:56:12.521] Successful
I1205 20:56:12.521] message:Error from server (NotFound): secrets "mysecret" not found
I1205 20:56:12.521] has:secrets "mysecret" not found
W1205 20:56:12.622] I1205 20:56:11.619704   52228 clientconn.go:551] parsed scheme: ""
W1205 20:56:12.622] I1205 20:56:11.619733   52228 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1205 20:56:12.622] I1205 20:56:11.619772   52228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1205 20:56:12.622] I1205 20:56:11.619804   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:56:12.623] I1205 20:56:11.620224   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:56:12.623] No resources found.
W1205 20:56:12.623] No resources found.
I1205 20:56:12.723] Successful
I1205 20:56:12.723] message:Error from server (NotFound): secrets "mysecret" not found
I1205 20:56:12.724] has:secrets "mysecret" not found
I1205 20:56:12.724] Successful
I1205 20:56:12.724] message:user-specified
I1205 20:56:12.724] has:user-specified
I1205 20:56:12.743] Successful
I1205 20:56:12.815] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"330bf17e-f8d0-11e8-83ce-0242ac110002","resourceVersion":"875","creationTimestamp":"2018-12-05T20:56:12Z"}}
... skipping 80 lines ...
I1205 20:56:14.680] has:Timeout exceeded while reading body
I1205 20:56:14.757] Successful
I1205 20:56:14.758] message:NAME        READY   STATUS    RESTARTS   AGE
I1205 20:56:14.758] valid-pod   0/1     Pending   0          1s
I1205 20:56:14.758] has:valid-pod
I1205 20:56:14.824] Successful
I1205 20:56:14.824] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1205 20:56:14.824] has:Invalid timeout value
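The timeout validation above applies to the global --request-timeout flag, which accepts a bare integer (seconds) or an integer with a time unit:

  kubectl get pods --request-timeout=1         # one second
  kubectl get pods --request-timeout=2m        # integer plus time unit
  kubectl get pods --request-timeout=invalid   # rejected with the message above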
I1205 20:56:14.898] pod "valid-pod" deleted
I1205 20:56:14.917] +++ exit code: 0
I1205 20:56:14.950] Recording: run_crd_tests
I1205 20:56:14.950] Running command: run_crd_tests
I1205 20:56:14.969] 
... skipping 166 lines ...
I1205 20:56:18.993] foo.company.com/test patched
I1205 20:56:19.079] crd.sh:237: Successful get foos/test {{.patched}}: value1
I1205 20:56:19.156] foo.company.com/test patched
I1205 20:56:19.237] crd.sh:239: Successful get foos/test {{.patched}}: value2
I1205 20:56:19.316] foo.company.com/test patched
I1205 20:56:19.403] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I1205 20:56:19.546] +++ [1205 20:56:19] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I1205 20:56:19.604] {
I1205 20:56:19.605]     "apiVersion": "company.com/v1",
I1205 20:56:19.605]     "kind": "Foo",
I1205 20:56:19.605]     "metadata": {
I1205 20:56:19.605]         "annotations": {
I1205 20:56:19.605]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
W1205 20:56:21.092] I1205 20:56:17.448022   52228 controller.go:608] quota admission added evaluator for: foos.company.com
W1205 20:56:21.093] I1205 20:56:20.737843   52228 controller.go:608] quota admission added evaluator for: bars.company.com
W1205 20:56:21.093] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 70615 Killed                  while [ ${tries} -lt 10 ]; do
W1205 20:56:21.093]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W1205 20:56:21.093] done
W1205 20:56:21.093] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 70614 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W1205 20:56:31.379] E1205 20:56:31.378194   55589 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos"]
W1205 20:56:31.569] I1205 20:56:31.569228   55589 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1205 20:56:31.571] I1205 20:56:31.570522   52228 clientconn.go:551] parsed scheme: ""
W1205 20:56:31.571] I1205 20:56:31.570562   52228 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1205 20:56:31.571] I1205 20:56:31.570655   52228 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1205 20:56:31.571] I1205 20:56:31.570711   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:56:31.571] I1205 20:56:31.571065   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 81 lines ...
I1205 20:56:43.446] +++ [1205 20:56:43] Testing cmd with image
I1205 20:56:43.533] Successful
I1205 20:56:43.533] message:deployment.apps/test1 created
I1205 20:56:43.533] has:deployment.apps/test1 created
I1205 20:56:43.609] deployment.extensions "test1" deleted
I1205 20:56:43.682] Successful
I1205 20:56:43.682] message:error: Invalid image name "InvalidImageName": invalid reference format
I1205 20:56:43.683] has:error: Invalid image name "InvalidImageName": invalid reference format
I1205 20:56:43.696] +++ exit code: 0
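The image-name check above rejects anything that is not a valid image reference before the deployment is created; one plausible way to reproduce it:

  kubectl create deployment test1 --image=k8s.gcr.io/nginx:test-cmd   # accepted
  kubectl create deployment test1 --image=InvalidImageName            # error: Invalid image name ... invalid reference format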
I1205 20:56:43.729] Recording: run_recursive_resources_tests
I1205 20:56:43.729] Running command: run_recursive_resources_tests
I1205 20:56:43.750] 
I1205 20:56:43.752] +++ Running case: test-cmd.run_recursive_resources_tests 
I1205 20:56:43.754] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I1205 20:56:43.905] Context "test" modified.
I1205 20:56:43.990] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:44.236] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:44.238] Successful
I1205 20:56:44.238] message:pod/busybox0 created
I1205 20:56:44.238] pod/busybox1 created
I1205 20:56:44.239] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1205 20:56:44.239] has:error validating data: kind not set
I1205 20:56:44.328] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:44.492] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1205 20:56:44.495] Successful
I1205 20:56:44.495] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1205 20:56:44.495] has:Object 'Kind' is missing
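The recursive tests point kubectl at a directory whose third manifest misspells kind as "ind"; with --recursive, kubectl processes the valid files and still surfaces the broken one, as above. Roughly:

  kubectl create -f hack/testdata/recursive/pod --recursive
  # pod/busybox0 created
  # pod/busybox1 created
  # error: ... Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod",...'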
I1205 20:56:44.582] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:44.829] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1205 20:56:44.831] Successful
I1205 20:56:44.831] message:pod/busybox0 replaced
I1205 20:56:44.832] pod/busybox1 replaced
I1205 20:56:44.832] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1205 20:56:44.832] has:error validating data: kind not set
I1205 20:56:44.919] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:45.008] Successful
I1205 20:56:45.009] message:Name:               busybox0
I1205 20:56:45.009] Namespace:          namespace-1544043403-2023
I1205 20:56:45.009] Priority:           0
I1205 20:56:45.009] PriorityClassName:  <none>
... skipping 159 lines ...
I1205 20:56:45.019] has:Object 'Kind' is missing
I1205 20:56:45.098] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:45.263] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1205 20:56:45.265] Successful
I1205 20:56:45.265] message:pod/busybox0 annotated
I1205 20:56:45.265] pod/busybox1 annotated
I1205 20:56:45.266] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1205 20:56:45.266] has:Object 'Kind' is missing
I1205 20:56:45.352] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:45.606] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1205 20:56:45.608] Successful
I1205 20:56:45.609] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1205 20:56:45.609] pod/busybox0 configured
I1205 20:56:45.609] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1205 20:56:45.609] pod/busybox1 configured
I1205 20:56:45.609] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1205 20:56:45.610] has:error validating data: kind not set
I1205 20:56:45.693] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:45.840] deployment.extensions/nginx created
I1205 20:56:45.932] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1205 20:56:46.016] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1205 20:56:46.175] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I1205 20:56:46.177] Successful
... skipping 42 lines ...
I1205 20:56:46.252] deployment.extensions "nginx" deleted
I1205 20:56:46.342] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:46.501] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:46.503] Successful
I1205 20:56:46.503] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1205 20:56:46.503] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1205 20:56:46.503] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1205 20:56:46.504] has:Object 'Kind' is missing
I1205 20:56:46.591] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:46.674] Successful
I1205 20:56:46.675] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1205 20:56:46.675] has:busybox0:busybox1:
I1205 20:56:46.677] Successful
I1205 20:56:46.677] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1205 20:56:46.677] has:Object 'Kind' is missing
I1205 20:56:46.765] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:46.850] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1205 20:56:46.938] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1205 20:56:46.940] Successful
I1205 20:56:46.940] message:pod/busybox0 labeled
I1205 20:56:46.941] pod/busybox1 labeled
I1205 20:56:46.941] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1205 20:56:46.941] has:Object 'Kind' is missing
I1205 20:56:47.028] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:47.110] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1205 20:56:47.197] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1205 20:56:47.199] Successful
I1205 20:56:47.199] message:pod/busybox0 patched
I1205 20:56:47.200] pod/busybox1 patched
I1205 20:56:47.200] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1205 20:56:47.200] has:Object 'Kind' is missing
I1205 20:56:47.286] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:47.458] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:47.460] Successful
I1205 20:56:47.461] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1205 20:56:47.461] pod "busybox0" force deleted
I1205 20:56:47.461] pod "busybox1" force deleted
I1205 20:56:47.461] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1205 20:56:47.462] has:Object 'Kind' is missing
I1205 20:56:47.547] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:47.689] replicationcontroller/busybox0 created
I1205 20:56:47.693] replicationcontroller/busybox1 created
I1205 20:56:47.787] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:47.872] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:47.954] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I1205 20:56:48.037] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I1205 20:56:48.212] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1205 20:56:48.297] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1205 20:56:48.299] Successful
I1205 20:56:48.299] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1205 20:56:48.300] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1205 20:56:48.300] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:48.300] has:Object 'Kind' is missing
I1205 20:56:48.373] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1205 20:56:48.454] horizontalpodautoscaler.autoscaling "busybox1" deleted
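The 1 2 80 triples checked above are the HPA bounds and CPU target set recursively on both replication controllers; the per-controller equivalent is:

  kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80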
I1205 20:56:48.548] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:48.637] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I1205 20:56:48.728] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I1205 20:56:48.916] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1205 20:56:49.003] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1205 20:56:49.004] Successful
I1205 20:56:49.005] message:service/busybox0 exposed
I1205 20:56:49.005] service/busybox1 exposed
I1205 20:56:49.005] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:49.006] has:Object 'Kind' is missing
I1205 20:56:49.096] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:49.184] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I1205 20:56:49.271] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I1205 20:56:49.475] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I1205 20:56:49.561] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I1205 20:56:49.563] Successful
I1205 20:56:49.563] message:replicationcontroller/busybox0 scaled
I1205 20:56:49.563] replicationcontroller/busybox1 scaled
I1205 20:56:49.563] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:49.564] has:Object 'Kind' is missing
I1205 20:56:49.653] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1205 20:56:49.825] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:49.827] Successful
I1205 20:56:49.828] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1205 20:56:49.828] replicationcontroller "busybox0" force deleted
I1205 20:56:49.828] replicationcontroller "busybox1" force deleted
I1205 20:56:49.828] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:49.828] has:Object 'Kind' is missing
I1205 20:56:49.914] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:50.066] deployment.extensions/nginx1-deployment created
I1205 20:56:50.070] deployment.extensions/nginx0-deployment created
I1205 20:56:50.171] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1205 20:56:50.259] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1205 20:56:50.457] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1205 20:56:50.459] Successful
I1205 20:56:50.460] message:deployment.extensions/nginx1-deployment skipped rollback (current template already matches revision 1)
I1205 20:56:50.460] deployment.extensions/nginx0-deployment skipped rollback (current template already matches revision 1)
I1205 20:56:50.460] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1205 20:56:50.460] has:Object 'Kind' is missing
I1205 20:56:50.548] deployment.extensions/nginx1-deployment paused
I1205 20:56:50.551] deployment.extensions/nginx0-deployment paused
I1205 20:56:50.649] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1205 20:56:50.651] Successful
I1205 20:56:50.652] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1205 20:56:50.652] has:Object 'Kind' is missing
I1205 20:56:50.737] deployment.extensions/nginx1-deployment resumed
I1205 20:56:50.741] deployment.extensions/nginx0-deployment resumed
I1205 20:56:50.833] generic-resources.sh:408: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I1205 20:56:50.835] Successful
I1205 20:56:50.836] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1205 20:56:50.836] has:Object 'Kind' is missing
W1205 20:56:50.936] Error from server (NotFound): namespaces "non-native-resources" not found
W1205 20:56:50.937] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1205 20:56:50.937] I1205 20:56:43.521826   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043403-16921", Name:"test1", UID:"45591725-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"985", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W1205 20:56:50.937] I1205 20:56:43.526028   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043403-16921", Name:"test1-fb488bd5d", UID:"4559b47e-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"986", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-6xgcb
W1205 20:56:50.937] I1205 20:56:45.843488   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043403-2023", Name:"nginx", UID:"46bb627c-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1011", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W1205 20:56:50.938] I1205 20:56:45.846255   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043403-2023", Name:"nginx-6f6bb85d9c", UID:"46bbf229-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-ln644
W1205 20:56:50.938] I1205 20:56:45.849832   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043403-2023", Name:"nginx-6f6bb85d9c", UID:"46bbf229-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-dvtfb
W1205 20:56:50.938] I1205 20:56:45.851154   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043403-2023", Name:"nginx-6f6bb85d9c", UID:"46bbf229-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-xmpxv
W1205 20:56:50.938] kubectl convert is DEPRECATED and will be removed in a future version.
W1205 20:56:50.938] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
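Note: the convert deprecation above recommends a server-side round trip instead of client-side conversion. A hedged sketch of that workflow (file and resource names are placeholders, not taken from this run):

    kubectl apply -f my-object.yaml   # persist the object to the cluster
    kubectl get rc busybox0 -o yaml   # read it back at the version the server serves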
W1205 20:56:50.939] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
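Note: this validation error is the client-side schema check tripping over the same missing kind; as the message itself suggests, validation can be switched off, in which case the decode error shown earlier surfaces instead. A sketch, assuming the recursive create that test-cmd appears to run here:

    # skip client-side validation, as the error message suggests
    kubectl create -f hack/testdata/recursive/rc --recursive --validate=false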
W1205 20:56:50.939] I1205 20:56:47.692894   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043403-2023", Name:"busybox0", UID:"47d59c44-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1042", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-7z9wz
W1205 20:56:50.939] I1205 20:56:47.695876   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043403-2023", Name:"busybox1", UID:"47d643c6-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1044", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-msxc7
W1205 20:56:50.939] I1205 20:56:47.699992   55589 namespace_controller.go:171] Namespace has been deleted non-native-resources
W1205 20:56:50.939] I1205 20:56:49.363201   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043403-2023", Name:"busybox0", UID:"47d59c44-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1063", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-p24xs
W1205 20:56:50.940] I1205 20:56:49.375789   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043403-2023", Name:"busybox1", UID:"47d643c6-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1067", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-zfszw
W1205 20:56:50.940] I1205 20:56:50.069426   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043403-2023", Name:"nginx1-deployment", UID:"4940283e-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1083", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W1205 20:56:50.940] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1205 20:56:50.940] I1205 20:56:50.072661   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043403-2023", Name:"nginx1-deployment-75f6fc6747", UID:"4940cb0b-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1084", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-bvb95
W1205 20:56:50.941] I1205 20:56:50.074391   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043403-2023", Name:"nginx0-deployment", UID:"4940fc5c-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1085", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W1205 20:56:50.941] I1205 20:56:50.075960   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043403-2023", Name:"nginx1-deployment-75f6fc6747", UID:"4940cb0b-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1084", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-kwb4b
W1205 20:56:50.941] I1205 20:56:50.079594   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043403-2023", Name:"nginx0-deployment-b6bb4ccbb", UID:"4941a771-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-78h6q
W1205 20:56:50.941] I1205 20:56:50.082974   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043403-2023", Name:"nginx0-deployment-b6bb4ccbb", UID:"4941a771-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-d9rcg
W1205 20:56:51.013] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1205 20:56:51.029] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1205 20:56:51.129] Successful
I1205 20:56:51.130] message:deployment.extensions/nginx1-deployment 
I1205 20:56:51.130] REVISION  CHANGE-CAUSE
I1205 20:56:51.130] 1         <none>
I1205 20:56:51.130] 
I1205 20:56:51.130] deployment.extensions/nginx0-deployment 
I1205 20:56:51.130] REVISION  CHANGE-CAUSE
I1205 20:56:51.130] 1         <none>
I1205 20:56:51.130] 
I1205 20:56:51.131] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1205 20:56:51.131] has:nginx0-deployment
I1205 20:56:51.131] Successful
I1205 20:56:51.131] message:deployment.extensions/nginx1-deployment 
I1205 20:56:51.131] REVISION  CHANGE-CAUSE
I1205 20:56:51.131] 1         <none>
I1205 20:56:51.131] 
I1205 20:56:51.132] deployment.extensions/nginx0-deployment 
I1205 20:56:51.132] REVISION  CHANGE-CAUSE
I1205 20:56:51.132] 1         <none>
I1205 20:56:51.132] 
I1205 20:56:51.132] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1205 20:56:51.132] has:nginx1-deployment
I1205 20:56:51.132] Successful
I1205 20:56:51.132] message:deployment.extensions/nginx1-deployment 
I1205 20:56:51.133] REVISION  CHANGE-CAUSE
I1205 20:56:51.133] 1         <none>
I1205 20:56:51.133] 
I1205 20:56:51.133] deployment.extensions/nginx0-deployment 
I1205 20:56:51.133] REVISION  CHANGE-CAUSE
I1205 20:56:51.133] 1         <none>
I1205 20:56:51.133] 
I1205 20:56:51.133] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1205 20:56:51.133] has:Object 'Kind' is missing
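Note: the three checks above appear to exercise recursive rollout history over a directory containing one broken manifest, presumably along these lines (exact flags assumed, not shown in this log):

    kubectl rollout history -f hack/testdata/recursive/deployment --recursive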
I1205 20:56:51.133] deployment.extensions "nginx1-deployment" force deleted
I1205 20:56:51.133] deployment.extensions "nginx0-deployment" force deleted
I1205 20:56:52.117] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:52.273] replicationcontroller/busybox0 created
I1205 20:56:52.277] replicationcontroller/busybox1 created
... skipping 7 lines ...
I1205 20:56:52.465] message:no rollbacker has been implemented for "ReplicationController"
I1205 20:56:52.465] no rollbacker has been implemented for "ReplicationController"
I1205 20:56:52.465] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:52.465] has:Object 'Kind' is missing
I1205 20:56:52.552] Successful
I1205 20:56:52.552] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:52.552] error: replicationcontrollers "busybox0" pausing is not supported
I1205 20:56:52.553] error: replicationcontrollers "busybox1" pausing is not supported
I1205 20:56:52.553] has:Object 'Kind' is missing
I1205 20:56:52.554] Successful
I1205 20:56:52.555] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:52.555] error: replicationcontrollers "busybox0" pausing is not supported
I1205 20:56:52.555] error: replicationcontrollers "busybox1" pausing is not supported
I1205 20:56:52.555] has:replicationcontrollers "busybox0" pausing is not supported
I1205 20:56:52.556] Successful
I1205 20:56:52.557] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:52.557] error: replicationcontrollers "busybox0" pausing is not supported
I1205 20:56:52.557] error: replicationcontrollers "busybox1" pausing is not supported
I1205 20:56:52.557] has:replicationcontrollers "busybox1" pausing is not supported
I1205 20:56:52.642] Successful
I1205 20:56:52.643] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:52.643] error: replicationcontrollers "busybox0" resuming is not supported
I1205 20:56:52.643] error: replicationcontrollers "busybox1" resuming is not supported
I1205 20:56:52.643] has:Object 'Kind' is missing
I1205 20:56:52.644] Successful
I1205 20:56:52.644] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:52.644] error: replicationcontrollers "busybox0" resuming is not supported
I1205 20:56:52.645] error: replicationcontrollers "busybox1" resuming is not supported
I1205 20:56:52.645] has:replicationcontrollers "busybox0" resuming is not supported
I1205 20:56:52.646] Successful
I1205 20:56:52.647] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:52.647] error: replicationcontrollers "busybox0" resuming is not supported
I1205 20:56:52.647] error: replicationcontrollers "busybox1" resuming is not supported
I1205 20:56:52.647] has:replicationcontrollers "busybox0" resuming is not supported
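Note: the pause/resume failures above are expected; at this release, kubectl rollout pause and resume are only implemented for deployments, so replication controllers reject both verbs, e.g.:

    kubectl rollout pause rc/busybox0    # error: replicationcontrollers "busybox0" pausing is not supported
    kubectl rollout resume rc/busybox0   # error: replicationcontrollers "busybox0" resuming is not supported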
I1205 20:56:52.720] replicationcontroller "busybox0" force deleted
I1205 20:56:52.725] replicationcontroller "busybox1" force deleted
W1205 20:56:52.826] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1205 20:56:52.826] I1205 20:56:52.276855   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043403-2023", Name:"busybox0", UID:"4a911bd8-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1129", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-dwxnr
W1205 20:56:52.827] I1205 20:56:52.279610   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043403-2023", Name:"busybox1", UID:"4a91bf7d-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1131", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-hc4vc
W1205 20:56:52.827] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1205 20:56:52.827] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1205 20:56:53.745] +++ exit code: 0
I1205 20:56:53.792] Recording: run_namespace_tests
I1205 20:56:53.793] Running command: run_namespace_tests
I1205 20:56:53.812] 
I1205 20:56:53.815] +++ Running case: test-cmd.run_namespace_tests 
I1205 20:56:53.817] +++ working dir: /go/src/k8s.io/kubernetes
I1205 20:56:53.820] +++ command: run_namespace_tests
I1205 20:56:53.829] +++ [1205 20:56:53] Testing kubectl(v1:namespaces)
I1205 20:56:53.896] namespace/my-namespace created
I1205 20:56:53.983] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1205 20:56:54.058] namespace "my-namespace" deleted
I1205 20:56:59.185] namespace/my-namespace condition met
I1205 20:56:59.265] Successful
I1205 20:56:59.265] message:Error from server (NotFound): namespaces "my-namespace" not found
I1205 20:56:59.265] has: not found
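Note: the five-second gap between the delete at 20:56:54 and the "condition met" line at 20:56:59 is the namespace finalizer draining; a wait of roughly this shape would produce it (exact flags assumed):

    kubectl delete namespace my-namespace
    kubectl wait --for=delete namespace/my-namespace --timeout=60s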
I1205 20:56:59.371] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I1205 20:56:59.442] namespace/other created
I1205 20:56:59.528] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I1205 20:56:59.613] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:56:59.757] pod/valid-pod created
I1205 20:56:59.845] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1205 20:56:59.930] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1205 20:57:00.007] Successful
I1205 20:57:00.007] message:error: a resource cannot be retrieved by name across all namespaces
I1205 20:57:00.008] has:a resource cannot be retrieved by name across all namespaces
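Note: the check above pins down the rule that a namespaced object can be listed across namespaces but not fetched by name across them:

    kubectl get pods --all-namespaces               # listing across namespaces: allowed
    kubectl get pods valid-pod --all-namespaces     # by name across namespaces: rejected
    kubectl get pods valid-pod --namespace=other    # by name in one namespace: allowed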
I1205 20:57:00.091] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1205 20:57:00.167] pod "valid-pod" force deleted
I1205 20:57:00.251] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:57:00.322] namespace "other" deleted
W1205 20:57:00.423] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1205 20:57:01.385] E1205 20:57:01.384522   55589 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1205 20:57:01.695] I1205 20:57:01.694841   55589 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1205 20:57:01.795] I1205 20:57:01.795216   55589 controller_utils.go:1034] Caches are synced for garbage collector controller
W1205 20:57:03.118] I1205 20:57:03.117679   55589 horizontal.go:309] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1544043403-2023
W1205 20:57:03.123] I1205 20:57:03.122442   55589 horizontal.go:309] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1544043403-2023
W1205 20:57:04.181] I1205 20:57:04.180878   55589 namespace_controller.go:171] Namespace has been deleted my-namespace
I1205 20:57:05.474] +++ exit code: 0
... skipping 113 lines ...
I1205 20:57:20.635] +++ command: run_client_config_tests
I1205 20:57:20.646] +++ [1205 20:57:20] Creating namespace namespace-1544043440-23186
I1205 20:57:20.714] namespace/namespace-1544043440-23186 created
I1205 20:57:20.778] Context "test" modified.
I1205 20:57:20.783] +++ [1205 20:57:20] Testing client config
I1205 20:57:20.846] Successful
I1205 20:57:20.847] message:error: stat missing: no such file or directory
I1205 20:57:20.847] has:missing: no such file or directory
I1205 20:57:20.910] Successful
I1205 20:57:20.910] message:error: stat missing: no such file or directory
I1205 20:57:20.910] has:missing: no such file or directory
I1205 20:57:20.975] Successful
I1205 20:57:20.975] message:error: stat missing: no such file or directory
I1205 20:57:20.975] has:missing: no such file or directory
I1205 20:57:21.038] Successful
I1205 20:57:21.039] message:Error in configuration: context was not found for specified context: missing-context
I1205 20:57:21.039] has:context was not found for specified context: missing-context
I1205 20:57:21.105] Successful
I1205 20:57:21.105] message:error: no server found for cluster "missing-cluster"
I1205 20:57:21.105] has:no server found for cluster "missing-cluster"
I1205 20:57:21.169] Successful
I1205 20:57:21.169] message:error: auth info "missing-user" does not exist
I1205 20:57:21.169] has:auth info "missing-user" does not exist
I1205 20:57:21.292] Successful
I1205 20:57:21.293] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1205 20:57:21.293] has:Error loading config file
I1205 20:57:21.355] Successful
I1205 20:57:21.355] message:error: stat missing-config: no such file or directory
I1205 20:57:21.355] has:no such file or directory
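Note: each client-config failure above maps to one kubeconfig-related flag pointed at something that does not exist; a sketch using the names from the messages:

    kubectl get pods --kubeconfig=missing        # stat missing: no such file or directory
    kubectl get pods --context=missing-context   # context was not found for specified context
    kubectl get pods --cluster=missing-cluster   # no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user         # auth info "missing-user" does not exist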
I1205 20:57:21.366] +++ exit code: 0
I1205 20:57:21.401] Recording: run_service_accounts_tests
I1205 20:57:21.401] Running command: run_service_accounts_tests
I1205 20:57:21.420] 
I1205 20:57:21.422] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 76 lines ...
I1205 20:57:28.585]                 job-name=test-job
I1205 20:57:28.585]                 run=pi
I1205 20:57:28.585] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1205 20:57:28.586] Parallelism:    1
I1205 20:57:28.586] Completions:    1
I1205 20:57:28.586] Start Time:     Wed, 05 Dec 2018 20:57:28 +0000
I1205 20:57:28.586] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1205 20:57:28.586] Pod Template:
I1205 20:57:28.586]   Labels:  controller-uid=600e3fc0-f8d0-11e8-83ce-0242ac110002
I1205 20:57:28.586]            job-name=test-job
I1205 20:57:28.586]            run=pi
I1205 20:57:28.586]   Containers:
I1205 20:57:28.586]    pi:
... skipping 329 lines ...
I1205 20:57:37.948]   selector:
I1205 20:57:37.948]     role: padawan
I1205 20:57:37.948]   sessionAffinity: None
I1205 20:57:37.948]   type: ClusterIP
I1205 20:57:37.948] status:
I1205 20:57:37.948]   loadBalancer: {}
W1205 20:57:38.048] error: you must specify resources by --filename when --local is set.
W1205 20:57:38.049] Example resource specifications include:
W1205 20:57:38.049]    '-f rsrc.yaml'
W1205 20:57:38.049]    '--filename=rsrc.json'
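Note: the YAML dump ending at status/loadBalancer above is a --local dry run of a selector change (role: padawan), and the error shows that --local refuses to fall back to the server, demanding an explicit file. A hedged sketch (the filename is hypothetical):

    kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml   # hypothetical file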
I1205 20:57:38.149] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1205 20:57:38.257] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1205 20:57:38.334] service "redis-master" deleted
... skipping 93 lines ...
I1205 20:57:43.959] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1205 20:57:44.048] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1205 20:57:44.151] daemonset.extensions/bind rolled back
I1205 20:57:44.242] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1205 20:57:44.328] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1205 20:57:44.430] Successful
I1205 20:57:44.430] message:error: unable to find specified revision 1000000 in history
I1205 20:57:44.431] has:unable to find specified revision
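Note: the revision error above comes from asking for a rollback target that was never recorded; the surrounding rollbacks presumably look like:

    kubectl rollout undo daemonset/bind                         # rolls back to the previous revision
    kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision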
I1205 20:57:44.522] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1205 20:57:44.608] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1205 20:57:44.715] daemonset.extensions/bind rolled back
I1205 20:57:44.810] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1205 20:57:44.898] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I1205 20:57:46.165] Namespace:    namespace-1544043465-8024
I1205 20:57:46.165] Selector:     app=guestbook,tier=frontend
I1205 20:57:46.165] Labels:       app=guestbook
I1205 20:57:46.166]               tier=frontend
I1205 20:57:46.166] Annotations:  <none>
I1205 20:57:46.166] Replicas:     3 current / 3 desired
I1205 20:57:46.166] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:57:46.166] Pod Template:
I1205 20:57:46.166]   Labels:  app=guestbook
I1205 20:57:46.166]            tier=frontend
I1205 20:57:46.166]   Containers:
I1205 20:57:46.166]    php-redis:
I1205 20:57:46.167]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1205 20:57:46.272] Namespace:    namespace-1544043465-8024
I1205 20:57:46.272] Selector:     app=guestbook,tier=frontend
I1205 20:57:46.272] Labels:       app=guestbook
I1205 20:57:46.272]               tier=frontend
I1205 20:57:46.272] Annotations:  <none>
I1205 20:57:46.272] Replicas:     3 current / 3 desired
I1205 20:57:46.272] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:57:46.272] Pod Template:
I1205 20:57:46.272]   Labels:  app=guestbook
I1205 20:57:46.273]            tier=frontend
I1205 20:57:46.273]   Containers:
I1205 20:57:46.273]    php-redis:
I1205 20:57:46.273]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1205 20:57:46.374] Namespace:    namespace-1544043465-8024
I1205 20:57:46.375] Selector:     app=guestbook,tier=frontend
I1205 20:57:46.375] Labels:       app=guestbook
I1205 20:57:46.375]               tier=frontend
I1205 20:57:46.375] Annotations:  <none>
I1205 20:57:46.375] Replicas:     3 current / 3 desired
I1205 20:57:46.375] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:57:46.375] Pod Template:
I1205 20:57:46.375]   Labels:  app=guestbook
I1205 20:57:46.375]            tier=frontend
I1205 20:57:46.376]   Containers:
I1205 20:57:46.376]    php-redis:
I1205 20:57:46.376]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1205 20:57:46.480] Namespace:    namespace-1544043465-8024
I1205 20:57:46.480] Selector:     app=guestbook,tier=frontend
I1205 20:57:46.480] Labels:       app=guestbook
I1205 20:57:46.480]               tier=frontend
I1205 20:57:46.480] Annotations:  <none>
I1205 20:57:46.480] Replicas:     3 current / 3 desired
I1205 20:57:46.481] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:57:46.481] Pod Template:
I1205 20:57:46.481]   Labels:  app=guestbook
I1205 20:57:46.481]            tier=frontend
I1205 20:57:46.481]   Containers:
I1205 20:57:46.481]    php-redis:
I1205 20:57:46.481]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I1205 20:57:46.482]   Type    Reason            Age   From                    Message
I1205 20:57:46.482]   ----    ------            ----  ----                    -------
I1205 20:57:46.482]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-b2xgq
I1205 20:57:46.482]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-zdntx
I1205 20:57:46.482]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-56sxg
I1205 20:57:46.482] 
W1205 20:57:46.586] E1205 20:57:44.159281   55589 daemon_controller.go:303] namespace-1544043462-2982/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1544043462-2982", SelfLink:"/apis/apps/v1/namespaces/namespace-1544043462-2982/daemonsets/bind", UID:"68b2e6c2-f8d0-11e8-83ce-0242ac110002", ResourceVersion:"1344", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63679640262, loc:(*time.Location)(0x66fa920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true", "deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1544043462-2982\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc003860620), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0041a9908), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0041109c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc003860660), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0026c9a28)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0041a9980)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1205 20:57:46.589] E1205 20:57:44.724341   55589 daemon_controller.go:303] namespace-1544043462-2982/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1544043462-2982", SelfLink:"/apis/apps/v1/namespaces/namespace-1544043462-2982/daemonsets/bind", UID:"68b2e6c2-f8d0-11e8-83ce-0242ac110002", ResourceVersion:"1348", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63679640262, loc:(*time.Location)(0x66fa920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1544043462-2982\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc004145d80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004537a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004174f60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc004145de0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0009aba08)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc004537b10)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1205 20:57:46.590] I1205 20:57:45.534429   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"6a4f20b5-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mt4sl
W1205 20:57:46.590] I1205 20:57:45.537968   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"6a4f20b5-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ljdgj
W1205 20:57:46.591] I1205 20:57:45.538025   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"6a4f20b5-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-spzjk
W1205 20:57:46.591] I1205 20:57:45.939307   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"6a8d47fa-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b2xgq
W1205 20:57:46.591] I1205 20:57:45.942107   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"6a8d47fa-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zdntx
W1205 20:57:46.591] I1205 20:57:45.943247   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"6a8d47fa-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-56sxg
... skipping 2 lines ...
I1205 20:57:46.692] Namespace:    namespace-1544043465-8024
I1205 20:57:46.692] Selector:     app=guestbook,tier=frontend
I1205 20:57:46.693] Labels:       app=guestbook
I1205 20:57:46.693]               tier=frontend
I1205 20:57:46.693] Annotations:  <none>
I1205 20:57:46.693] Replicas:     3 current / 3 desired
I1205 20:57:46.693] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:57:46.693] Pod Template:
I1205 20:57:46.693]   Labels:  app=guestbook
I1205 20:57:46.693]            tier=frontend
I1205 20:57:46.694]   Containers:
I1205 20:57:46.694]    php-redis:
I1205 20:57:46.694]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1205 20:57:46.730] Namespace:    namespace-1544043465-8024
I1205 20:57:46.730] Selector:     app=guestbook,tier=frontend
I1205 20:57:46.730] Labels:       app=guestbook
I1205 20:57:46.730]               tier=frontend
I1205 20:57:46.730] Annotations:  <none>
I1205 20:57:46.731] Replicas:     3 current / 3 desired
I1205 20:57:46.731] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:57:46.731] Pod Template:
I1205 20:57:46.731]   Labels:  app=guestbook
I1205 20:57:46.731]            tier=frontend
I1205 20:57:46.731]   Containers:
I1205 20:57:46.731]    php-redis:
I1205 20:57:46.731]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1205 20:57:46.827] Namespace:    namespace-1544043465-8024
I1205 20:57:46.827] Selector:     app=guestbook,tier=frontend
I1205 20:57:46.827] Labels:       app=guestbook
I1205 20:57:46.827]               tier=frontend
I1205 20:57:46.827] Annotations:  <none>
I1205 20:57:46.827] Replicas:     3 current / 3 desired
I1205 20:57:46.827] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:57:46.828] Pod Template:
I1205 20:57:46.828]   Labels:  app=guestbook
I1205 20:57:46.828]            tier=frontend
I1205 20:57:46.828]   Containers:
I1205 20:57:46.828]    php-redis:
I1205 20:57:46.828]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1205 20:57:46.933] Namespace:    namespace-1544043465-8024
I1205 20:57:46.933] Selector:     app=guestbook,tier=frontend
I1205 20:57:46.933] Labels:       app=guestbook
I1205 20:57:46.933]               tier=frontend
I1205 20:57:46.933] Annotations:  <none>
I1205 20:57:46.933] Replicas:     3 current / 3 desired
I1205 20:57:46.934] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:57:46.934] Pod Template:
I1205 20:57:46.934]   Labels:  app=guestbook
I1205 20:57:46.934]            tier=frontend
I1205 20:57:46.934]   Containers:
I1205 20:57:46.934]    php-redis:
I1205 20:57:46.934]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I1205 20:57:47.715] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I1205 20:57:47.799] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I1205 20:57:47.883] replicationcontroller/frontend scaled
I1205 20:57:47.971] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I1205 20:57:48.045] replicationcontroller "frontend" deleted
W1205 20:57:48.146] I1205 20:57:47.112091   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"6a8d47fa-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-zdntx
W1205 20:57:48.146] error: Expected replicas to be 3, was 2
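Note: the "Expected replicas to be 3, was 2" error is kubectl scale's precondition check; core.sh appears to scale with a stale --current-replicas value, something like:

    kubectl scale rc frontend --current-replicas=3 --replicas=2   # only proceeds if the live count is 3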
W1205 20:57:48.147] I1205 20:57:47.633680   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"6a8d47fa-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1389", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xmwnc
W1205 20:57:48.147] I1205 20:57:47.888008   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"6a8d47fa-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1394", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-xmwnc
W1205 20:57:48.209] I1205 20:57:48.208791   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"redis-master", UID:"6be78d79-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1405", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-jqrc9
I1205 20:57:48.310] replicationcontroller/redis-master created
I1205 20:57:48.357] replicationcontroller/redis-slave created
I1205 20:57:48.455] replicationcontroller/redis-master scaled
... skipping 29 lines ...
I1205 20:57:49.900] service "expose-test-deployment" deleted
I1205 20:57:49.994] Successful
I1205 20:57:49.994] message:service/expose-test-deployment exposed
I1205 20:57:49.995] has:service/expose-test-deployment exposed
I1205 20:57:50.075] service "expose-test-deployment" deleted
I1205 20:57:50.161] Successful
I1205 20:57:50.161] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1205 20:57:50.161] See 'kubectl expose -h' for help and examples
I1205 20:57:50.161] has:invalid deployment: no selectors
I1205 20:57:50.245] Successful
I1205 20:57:50.245] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1205 20:57:50.245] See 'kubectl expose -h' for help and examples
I1205 20:57:50.246] has:invalid deployment: no selectors
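Note: kubectl expose derives the new Service's selector from the target object, so a deployment without a usable .spec.selector cannot be exposed; a sketch (the override value is assumed, not from this run):

    kubectl expose deployment nginx-deployment --port=80                        # fails here: no selectors
    kubectl expose deployment nginx-deployment --port=80 --selector=app=nginx   # assumed explicit override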
W1205 20:57:50.346] I1205 20:57:49.294227   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment", UID:"6c8d358b-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1460", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-659fc6fb to 3
W1205 20:57:50.347] I1205 20:57:49.297332   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-659fc6fb", UID:"6c8dd09f-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-w7v84
W1205 20:57:50.347] I1205 20:57:49.300095   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-659fc6fb", UID:"6c8dd09f-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-xchbv
W1205 20:57:50.348] I1205 20:57:49.300291   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-659fc6fb", UID:"6c8dd09f-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-gb5fb
... skipping 30 lines ...
I1205 20:57:52.417] service "frontend" deleted
I1205 20:57:52.430] service "frontend-2" deleted
I1205 20:57:52.444] service "frontend-3" deleted
I1205 20:57:52.456] service "frontend-4" deleted
I1205 20:57:52.466] service "frontend-5" deleted
I1205 20:57:52.606] Successful
I1205 20:57:52.606] message:error: cannot expose a Node
I1205 20:57:52.606] has:cannot expose
I1205 20:57:52.786] Successful
I1205 20:57:52.787] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1205 20:57:52.787] has:metadata.name: Invalid value
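Note: Service names must be valid DNS labels, hence the 63-character ceiling the validator enforces above; the rejected name is 68 characters:

    echo -n invalid-large-service-name-that-has-more-than-sixty-three-characters | wc -c   # 68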
I1205 20:57:52.932] Successful
I1205 20:57:52.933] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I1205 20:57:55.671] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1205 20:57:55.760] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1205 20:57:55.832] horizontalpodautoscaler.autoscaling "frontend" deleted
W1205 20:57:55.933] I1205 20:57:55.244126   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"70191370-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1631", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-45fbq
W1205 20:57:55.933] I1205 20:57:55.246943   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"70191370-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1631", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9bb56
W1205 20:57:55.933] I1205 20:57:55.248577   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043465-8024", Name:"frontend", UID:"70191370-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"1631", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dws78
W1205 20:57:55.934] Error: required flag(s) "max" not set
W1205 20:57:55.934] 
W1205 20:57:55.934] 
W1205 20:57:55.934] Examples:
W1205 20:57:55.934]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1205 20:57:55.934]   kubectl autoscale deployment foo --min=2 --max=10
W1205 20:57:55.934]   
... skipping 54 lines ...
I1205 20:57:56.139]           limits:
I1205 20:57:56.139]             cpu: 300m
I1205 20:57:56.140]           requests:
I1205 20:57:56.140]             cpu: 300m
I1205 20:57:56.140]       terminationGracePeriodSeconds: 0
I1205 20:57:56.140] status: {}
W1205 20:57:56.240] Error from server (NotFound): deployments.extensions "nginx-deployment-resources" not found
I1205 20:57:56.369] deployment.extensions/nginx-deployment-resources created
I1205 20:57:56.472] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1205 20:57:56.556] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1205 20:57:56.641] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1205 20:57:56.729] deployment.extensions/nginx-deployment-resources resource requirements updated
I1205 20:57:56.820] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 82 lines ...
W1205 20:57:57.786] I1205 20:57:56.732743   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources", UID:"70c53256-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1666", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W1205 20:57:57.787] I1205 20:57:56.735864   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-6c5996c457", UID:"70fcbdfa-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1667", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-m69xc
W1205 20:57:57.787] I1205 20:57:56.738523   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources", UID:"70c53256-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1666", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W1205 20:57:57.787] I1205 20:57:56.743114   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources", UID:"70c53256-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 2
W1205 20:57:57.788] I1205 20:57:56.743449   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-69c96fd869", UID:"70c5e0df-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-kn58r
W1205 20:57:57.788] I1205 20:57:56.748737   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-6c5996c457", UID:"70fcbdfa-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1681", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-6cxx9
W1205 20:57:57.788] error: unable to find container named redis
W1205 20:57:57.788] I1205 20:57:57.084195   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources", UID:"70c53256-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 0
W1205 20:57:57.788] I1205 20:57:57.089770   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-69c96fd869", UID:"70c5e0df-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-48pzl
W1205 20:57:57.789] I1205 20:57:57.089819   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-69c96fd869", UID:"70c5e0df-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-b7xqh
W1205 20:57:57.789] I1205 20:57:57.089929   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources", UID:"70c53256-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 2
W1205 20:57:57.789] I1205 20:57:57.093893   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-5f4579485f", UID:"7131640c-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-88vkh
W1205 20:57:57.789] I1205 20:57:57.096911   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-5f4579485f", UID:"7131640c-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-rpjjq
W1205 20:57:57.790] I1205 20:57:57.344445   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources", UID:"70c53256-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1718", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-6c5996c457 to 0
W1205 20:57:57.790] I1205 20:57:57.350845   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources", UID:"70c53256-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 2
W1205 20:57:57.790] I1205 20:57:57.479256   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-6c5996c457", UID:"70fcbdfa-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6c5996c457-6cxx9
W1205 20:57:57.791] I1205 20:57:57.529588   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-6c5996c457", UID:"70fcbdfa-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6c5996c457-m69xc
W1205 20:57:57.791] error: you must specify resources by --filename when --local is set.
W1205 20:57:57.791] Example resource specifications include:
W1205 20:57:57.791]    '-f rsrc.yaml'
W1205 20:57:57.791]    '--filename=rsrc.json'
W1205 20:57:57.878] I1205 20:57:57.877622   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-ff8d89cb6", UID:"7159273e-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1723", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-j49rt
I1205 20:57:57.979] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1205 20:57:57.979] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
... skipping 31 lines ...
I1205 20:57:59.153] message:extensions/v1beta1
I1205 20:57:59.153] has:extensions/v1beta1
I1205 20:57:59.233] Successful
I1205 20:57:59.233] message:apps/v1
I1205 20:57:59.233] has:apps/v1
W1205 20:57:59.334] I1205 20:57:58.027375   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043465-8024", Name:"nginx-deployment-resources-ff8d89cb6", UID:"7159273e-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1723", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-nddpx
W1205 20:57:59.334] E1205 20:57:58.225786   55589 replica_set.go:450] Sync "namespace-1544043465-8024/nginx-deployment-resources-ff8d89cb6" failed with replicasets.apps "nginx-deployment-resources-ff8d89cb6" not found
W1205 20:57:59.334] I1205 20:57:58.431014   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043478-31624", Name:"test-nginx-extensions", UID:"71ff5453-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1756", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5b89c6c69f to 1
W1205 20:57:59.335] I1205 20:57:58.435479   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"test-nginx-extensions-5b89c6c69f", UID:"71fffb66-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1757", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5b89c6c69f-bs8gr
W1205 20:57:59.335] I1205 20:57:58.909996   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043478-31624", Name:"test-nginx-apps", UID:"72488228-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1770", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-55c9b846cc to 1
W1205 20:57:59.335] I1205 20:57:58.915505   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"test-nginx-apps-55c9b846cc", UID:"72490fa8-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1771", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-apps-55c9b846cc-phbhm
I1205 20:57:59.436] Successful describe rs:
I1205 20:57:59.436] Name:           test-nginx-apps-55c9b846cc
... skipping 3 lines ...
I1205 20:57:59.436]                 pod-template-hash=55c9b846cc
I1205 20:57:59.437] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1205 20:57:59.437]                 deployment.kubernetes.io/max-replicas: 2
I1205 20:57:59.437]                 deployment.kubernetes.io/revision: 1
I1205 20:57:59.437] Controlled By:  Deployment/test-nginx-apps
I1205 20:57:59.437] Replicas:       1 current / 1 desired
I1205 20:57:59.437] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1205 20:57:59.437] Pod Template:
I1205 20:57:59.437]   Labels:  app=test-nginx-apps
I1205 20:57:59.437]            pod-template-hash=55c9b846cc
I1205 20:57:59.437]   Containers:
I1205 20:57:59.438]    nginx:
I1205 20:57:59.438]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
W1205 20:58:03.466] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W1205 20:58:03.467] I1205 20:58:02.976501   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043478-31624", Name:"nginx", UID:"7464bd47-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1892", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 1
W1205 20:58:03.467] I1205 20:58:02.979565   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-9486b7cb7", UID:"74b586fc-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1893", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-pdpfh
W1205 20:58:03.467] I1205 20:58:02.983229   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043478-31624", Name:"nginx", UID:"7464bd47-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1892", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-6f6bb85d9c to 2
W1205 20:58:03.468] I1205 20:58:02.988080   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043478-31624", Name:"nginx", UID:"7464bd47-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1895", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 2
W1205 20:58:03.468] I1205 20:58:02.988747   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-6f6bb85d9c", UID:"7465565c-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1899", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-6f6bb85d9c-bf6gs
W1205 20:58:03.468] E1205 20:58:02.989420   55589 replica_set.go:450] Sync "namespace-1544043478-31624/nginx-9486b7cb7" failed with Operation cannot be fulfilled on replicasets.apps "nginx-9486b7cb7": the object has been modified; please apply your changes to the latest version and try again
W1205 20:58:03.468] I1205 20:58:02.992274   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-9486b7cb7", UID:"74b586fc-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1902", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-4mqsf
I1205 20:58:04.451] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1205 20:58:04.637] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1205 20:58:04.740] deployment.extensions/nginx rolled back
W1205 20:58:04.841] error: unable to find specified revision 1000000 in history
I1205 20:58:05.834] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1205 20:58:05.924] deployment.extensions/nginx paused
W1205 20:58:06.030] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I1205 20:58:06.131] deployment.extensions/nginx resumed
I1205 20:58:06.231] deployment.extensions/nginx rolled back
I1205 20:58:06.409]     deployment.kubernetes.io/revision-history: 1,3
W1205 20:58:06.594] error: desired revision (3) is different from the running revision (5)
I1205 20:58:06.740] deployment.extensions/nginx2 created
I1205 20:58:06.823] deployment.extensions "nginx2" deleted
I1205 20:58:06.902] deployment.extensions "nginx" deleted
I1205 20:58:06.994] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:58:07.148] deployment.extensions/nginx-deployment created
I1205 20:58:07.246] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 21 lines ...
W1205 20:58:08.757] I1205 20:58:07.157886   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment-646d4f779d", UID:"77329ebc-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1969", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-rb6f9
W1205 20:58:08.757] I1205 20:58:07.515247   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment", UID:"77320533-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1982", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W1205 20:58:08.757] I1205 20:58:07.519487   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment-85db47bbdb", UID:"776a17c5-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-v6g94
W1205 20:58:08.757] I1205 20:58:07.522800   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment", UID:"77320533-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1982", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1205 20:58:08.758] I1205 20:58:07.528549   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment-646d4f779d", UID:"77329ebc-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1989", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-rtvj6
W1205 20:58:08.758] I1205 20:58:07.532809   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment", UID:"77320533-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1984", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 2
W1205 20:58:08.758] E1205 20:58:07.535434   55589 replica_set.go:450] Sync "namespace-1544043478-31624/nginx-deployment-85db47bbdb" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-85db47bbdb": the object has been modified; please apply your changes to the latest version and try again
W1205 20:58:08.758] I1205 20:58:07.538388   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment-85db47bbdb", UID:"776a17c5-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-6ttpz
W1205 20:58:08.759] error: unable to find container named "redis"
W1205 20:58:08.759] I1205 20:58:08.664436   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment", UID:"77320533-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2015", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1205 20:58:08.759] I1205 20:58:08.669408   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment-646d4f779d", UID:"77329ebc-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-wp7pm
W1205 20:58:08.759] I1205 20:58:08.669663   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment-646d4f779d", UID:"77329ebc-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-rb6f9
W1205 20:58:08.760] I1205 20:58:08.674308   55589 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment", UID:"77320533-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2018", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 2
W1205 20:58:08.760] I1205 20:58:08.677925   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment-dc756cc6", UID:"78185924-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-ksqdc
W1205 20:58:08.760] I1205 20:58:08.681016   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment-dc756cc6", UID:"78185924-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-psjz9
... skipping 83 lines ...
I1205 20:58:13.332] Namespace:    namespace-1544043491-20296
I1205 20:58:13.332] Selector:     app=guestbook,tier=frontend
I1205 20:58:13.332] Labels:       app=guestbook
I1205 20:58:13.333]               tier=frontend
I1205 20:58:13.333] Annotations:  <none>
I1205 20:58:13.333] Replicas:     3 current / 3 desired
I1205 20:58:13.333] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:13.333] Pod Template:
I1205 20:58:13.333]   Labels:  app=guestbook
I1205 20:58:13.333]            tier=frontend
I1205 20:58:13.333]   Containers:
I1205 20:58:13.333]    php-redis:
I1205 20:58:13.333]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1205 20:58:13.442] Namespace:    namespace-1544043491-20296
I1205 20:58:13.443] Selector:     app=guestbook,tier=frontend
I1205 20:58:13.443] Labels:       app=guestbook
I1205 20:58:13.443]               tier=frontend
I1205 20:58:13.443] Annotations:  <none>
I1205 20:58:13.443] Replicas:     3 current / 3 desired
I1205 20:58:13.443] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:13.443] Pod Template:
I1205 20:58:13.443]   Labels:  app=guestbook
I1205 20:58:13.443]            tier=frontend
I1205 20:58:13.443]   Containers:
I1205 20:58:13.443]    php-redis:
I1205 20:58:13.443]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1205 20:58:13.545] Namespace:    namespace-1544043491-20296
I1205 20:58:13.545] Selector:     app=guestbook,tier=frontend
I1205 20:58:13.545] Labels:       app=guestbook
I1205 20:58:13.545]               tier=frontend
I1205 20:58:13.545] Annotations:  <none>
I1205 20:58:13.545] Replicas:     3 current / 3 desired
I1205 20:58:13.545] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:13.546] Pod Template:
I1205 20:58:13.546]   Labels:  app=guestbook
I1205 20:58:13.546]            tier=frontend
I1205 20:58:13.546]   Containers:
I1205 20:58:13.546]    php-redis:
I1205 20:58:13.546]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 6 lines ...
I1205 20:58:13.546]       GET_HOSTS_FROM:  dns
I1205 20:58:13.547]     Mounts:            <none>
I1205 20:58:13.547]   Volumes:             <none>
I1205 20:58:13.547] 
W1205 20:58:13.647] I1205 20:58:10.983958   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment-5b795689cd", UID:"79000738-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-f6ftm
W1205 20:58:13.648] I1205 20:58:11.033235   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043478-31624", Name:"nginx-deployment-5b795689cd", UID:"79000738-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-d6k27
W1205 20:58:13.648] E1205 20:58:11.279407   55589 replica_set.go:450] Sync "namespace-1544043478-31624/nginx-deployment-65b869c68c" failed with replicasets.apps "nginx-deployment-65b869c68c" not found
W1205 20:58:13.648] E1205 20:58:11.379093   55589 replica_set.go:450] Sync "namespace-1544043478-31624/nginx-deployment-5766b7c95b" failed with replicasets.apps "nginx-deployment-5766b7c95b" not found
W1205 20:58:13.649] E1205 20:58:11.529082   55589 replica_set.go:450] Sync "namespace-1544043478-31624/nginx-deployment-669d4f8fc9" failed with replicasets.apps "nginx-deployment-669d4f8fc9" not found
W1205 20:58:13.649] E1205 20:58:11.579281   55589 replica_set.go:450] Sync "namespace-1544043478-31624/nginx-deployment-5b795689cd" failed with replicasets.apps "nginx-deployment-5b795689cd" not found
W1205 20:58:13.649] E1205 20:58:11.629068   55589 replica_set.go:450] Sync "namespace-1544043478-31624/nginx-deployment-7b8f7659b7" failed with replicasets.apps "nginx-deployment-7b8f7659b7" not found
W1205 20:58:13.649] I1205 20:58:11.811112   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend", UID:"79f88b68-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2180", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tr24l
W1205 20:58:13.650] I1205 20:58:11.831012   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend", UID:"79f88b68-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2180", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r2nxj
W1205 20:58:13.650] I1205 20:58:11.931231   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend", UID:"79f88b68-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2180", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mxc42
W1205 20:58:13.650] E1205 20:58:12.129084   55589 replica_set.go:450] Sync "namespace-1544043491-20296/frontend" failed with replicasets.apps "frontend" not found
W1205 20:58:13.650] I1205 20:58:12.229946   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend-no-cascade", UID:"7a38ebd7-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2194", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-f76g5
W1205 20:58:13.651] I1205 20:58:12.280749   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend-no-cascade", UID:"7a38ebd7-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2194", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-bx954
W1205 20:58:13.651] I1205 20:58:12.330369   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend-no-cascade", UID:"7a38ebd7-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2194", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-wvrt6
W1205 20:58:13.651] E1205 20:58:12.579181   55589 replica_set.go:450] Sync "namespace-1544043491-20296/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W1205 20:58:13.651] I1205 20:58:13.099505   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend", UID:"7abd9417-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-85fhx
W1205 20:58:13.652] I1205 20:58:13.102189   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend", UID:"7abd9417-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jgg6v
W1205 20:58:13.652] I1205 20:58:13.102247   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend", UID:"7abd9417-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7ddv7
I1205 20:58:13.752] apps.sh:543: Successful describe
I1205 20:58:13.753] Name:         frontend
I1205 20:58:13.753] Namespace:    namespace-1544043491-20296
I1205 20:58:13.753] Selector:     app=guestbook,tier=frontend
I1205 20:58:13.753] Labels:       app=guestbook
I1205 20:58:13.753]               tier=frontend
I1205 20:58:13.753] Annotations:  <none>
I1205 20:58:13.753] Replicas:     3 current / 3 desired
I1205 20:58:13.753] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:13.753] Pod Template:
I1205 20:58:13.753]   Labels:  app=guestbook
I1205 20:58:13.754]            tier=frontend
I1205 20:58:13.754]   Containers:
I1205 20:58:13.754]    php-redis:
I1205 20:58:13.754]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1205 20:58:13.786] Namespace:    namespace-1544043491-20296
I1205 20:58:13.786] Selector:     app=guestbook,tier=frontend
I1205 20:58:13.786] Labels:       app=guestbook
I1205 20:58:13.786]               tier=frontend
I1205 20:58:13.786] Annotations:  <none>
I1205 20:58:13.786] Replicas:     3 current / 3 desired
I1205 20:58:13.787] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:13.787] Pod Template:
I1205 20:58:13.787]   Labels:  app=guestbook
I1205 20:58:13.787]            tier=frontend
I1205 20:58:13.787]   Containers:
I1205 20:58:13.787]    php-redis:
I1205 20:58:13.787]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1205 20:58:13.891] Namespace:    namespace-1544043491-20296
I1205 20:58:13.891] Selector:     app=guestbook,tier=frontend
I1205 20:58:13.892] Labels:       app=guestbook
I1205 20:58:13.892]               tier=frontend
I1205 20:58:13.892] Annotations:  <none>
I1205 20:58:13.892] Replicas:     3 current / 3 desired
I1205 20:58:13.892] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:13.892] Pod Template:
I1205 20:58:13.892]   Labels:  app=guestbook
I1205 20:58:13.892]            tier=frontend
I1205 20:58:13.892]   Containers:
I1205 20:58:13.892]    php-redis:
I1205 20:58:13.892]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1205 20:58:13.990] Namespace:    namespace-1544043491-20296
I1205 20:58:13.990] Selector:     app=guestbook,tier=frontend
I1205 20:58:13.990] Labels:       app=guestbook
I1205 20:58:13.990]               tier=frontend
I1205 20:58:13.990] Annotations:  <none>
I1205 20:58:13.990] Replicas:     3 current / 3 desired
I1205 20:58:13.990] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:13.990] Pod Template:
I1205 20:58:13.991]   Labels:  app=guestbook
I1205 20:58:13.991]            tier=frontend
I1205 20:58:13.991]   Containers:
I1205 20:58:13.991]    php-redis:
I1205 20:58:13.991]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I1205 20:58:14.095] Namespace:    namespace-1544043491-20296
I1205 20:58:14.095] Selector:     app=guestbook,tier=frontend
I1205 20:58:14.096] Labels:       app=guestbook
I1205 20:58:14.096]               tier=frontend
I1205 20:58:14.096] Annotations:  <none>
I1205 20:58:14.096] Replicas:     3 current / 3 desired
I1205 20:58:14.096] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:14.096] Pod Template:
I1205 20:58:14.096]   Labels:  app=guestbook
I1205 20:58:14.096]            tier=frontend
I1205 20:58:14.096]   Containers:
I1205 20:58:14.096]    php-redis:
I1205 20:58:14.096]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I1205 20:58:19.199] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1205 20:58:19.287] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1205 20:58:19.362] horizontalpodautoscaler.autoscaling "frontend" deleted
W1205 20:58:19.463] I1205 20:58:18.786630   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend", UID:"7e216f90-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4qtr8
W1205 20:58:19.464] I1205 20:58:18.789336   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend", UID:"7e216f90-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bskjm
W1205 20:58:19.464] I1205 20:58:18.789497   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544043491-20296", Name:"frontend", UID:"7e216f90-f8d0-11e8-83ce-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s47tb
W1205 20:58:19.464] Error: required flag(s) "max" not set
W1205 20:58:19.465] 
W1205 20:58:19.465] 
W1205 20:58:19.465] Examples:
W1205 20:58:19.465]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1205 20:58:19.465]   kubectl autoscale deployment foo --min=2 --max=10
W1205 20:58:19.465]   
... skipping 88 lines ...
I1205 20:58:22.377] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1205 20:58:22.464] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1205 20:58:22.566] statefulset.apps/nginx rolled back
I1205 20:58:22.659] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1205 20:58:22.747] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1205 20:58:22.848] Successful
I1205 20:58:22.849] message:error: unable to find specified revision 1000000 in history
I1205 20:58:22.849] has:unable to find specified revision
I1205 20:58:22.934] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1205 20:58:23.022] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1205 20:58:23.123] statefulset.apps/nginx rolled back
I1205 20:58:23.215] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I1205 20:58:23.304] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I1205 20:58:25.032] Name:         mock
I1205 20:58:25.032] Namespace:    namespace-1544043504-28425
I1205 20:58:25.032] Selector:     app=mock
I1205 20:58:25.032] Labels:       app=mock
I1205 20:58:25.032] Annotations:  <none>
I1205 20:58:25.032] Replicas:     1 current / 1 desired
I1205 20:58:25.032] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:25.032] Pod Template:
I1205 20:58:25.032]   Labels:  app=mock
I1205 20:58:25.032]   Containers:
I1205 20:58:25.032]    mock-container:
I1205 20:58:25.033]     Image:        k8s.gcr.io/pause:2.0
I1205 20:58:25.033]     Port:         9949/TCP
... skipping 56 lines ...
I1205 20:58:27.133] Name:         mock
I1205 20:58:27.133] Namespace:    namespace-1544043504-28425
I1205 20:58:27.133] Selector:     app=mock
I1205 20:58:27.133] Labels:       app=mock
I1205 20:58:27.133] Annotations:  <none>
I1205 20:58:27.134] Replicas:     1 current / 1 desired
I1205 20:58:27.134] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:27.134] Pod Template:
I1205 20:58:27.134]   Labels:  app=mock
I1205 20:58:27.134]   Containers:
I1205 20:58:27.134]    mock-container:
I1205 20:58:27.134]     Image:        k8s.gcr.io/pause:2.0
I1205 20:58:27.134]     Port:         9949/TCP
... skipping 56 lines ...
I1205 20:58:29.240] Name:         mock
I1205 20:58:29.240] Namespace:    namespace-1544043504-28425
I1205 20:58:29.240] Selector:     app=mock
I1205 20:58:29.240] Labels:       app=mock
I1205 20:58:29.240] Annotations:  <none>
I1205 20:58:29.240] Replicas:     1 current / 1 desired
I1205 20:58:29.240] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:29.241] Pod Template:
I1205 20:58:29.241]   Labels:  app=mock
I1205 20:58:29.241]   Containers:
I1205 20:58:29.241]    mock-container:
I1205 20:58:29.241]     Image:        k8s.gcr.io/pause:2.0
I1205 20:58:29.241]     Port:         9949/TCP
... skipping 42 lines ...
I1205 20:58:31.280] Namespace:    namespace-1544043504-28425
I1205 20:58:31.280] Selector:     app=mock
I1205 20:58:31.280] Labels:       app=mock
I1205 20:58:31.280]               status=replaced
I1205 20:58:31.280] Annotations:  <none>
I1205 20:58:31.280] Replicas:     1 current / 1 desired
I1205 20:58:31.280] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:31.280] Pod Template:
I1205 20:58:31.280]   Labels:  app=mock
I1205 20:58:31.281]   Containers:
I1205 20:58:31.281]    mock-container:
I1205 20:58:31.281]     Image:        k8s.gcr.io/pause:2.0
I1205 20:58:31.281]     Port:         9949/TCP
... skipping 11 lines ...
I1205 20:58:31.282] Namespace:    namespace-1544043504-28425
I1205 20:58:31.282] Selector:     app=mock2
I1205 20:58:31.282] Labels:       app=mock2
I1205 20:58:31.282]               status=replaced
I1205 20:58:31.282] Annotations:  <none>
I1205 20:58:31.283] Replicas:     1 current / 1 desired
I1205 20:58:31.283] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1205 20:58:31.283] Pod Template:
I1205 20:58:31.283]   Labels:  app=mock2
I1205 20:58:31.283]   Containers:
I1205 20:58:31.283]    mock-container:
I1205 20:58:31.283]     Image:        k8s.gcr.io/pause:2.0
I1205 20:58:31.283]     Port:         9949/TCP
... skipping 110 lines ...
I1205 20:58:36.564] persistentvolume/pv0001 created
I1205 20:58:36.681] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I1205 20:58:36.768] persistentvolume "pv0001" deleted
I1205 20:58:36.941] persistentvolume/pv0002 created
I1205 20:58:37.044] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I1205 20:58:37.129] persistentvolume "pv0002" deleted
W1205 20:58:37.230] E1205 20:58:36.944620   55589 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I1205 20:58:37.331] persistentvolume/pv0003 created
I1205 20:58:37.408] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I1205 20:58:37.491] persistentvolume "pv0003" deleted
I1205 20:58:37.595] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I1205 20:58:37.612] +++ exit code: 0
I1205 20:58:37.655] Recording: run_persistent_volume_claims_tests
... skipping 475 lines ...
I1205 20:58:42.520] yes
I1205 20:58:42.520] has:the server doesn't have a resource type
I1205 20:58:42.600] Successful
I1205 20:58:42.601] message:yes
I1205 20:58:42.601] has:yes
I1205 20:58:42.678] Successful
I1205 20:58:42.678] message:error: --subresource can not be used with NonResourceURL
I1205 20:58:42.678] has:subresource can not be used with NonResourceURL
I1205 20:58:42.768] Successful
I1205 20:58:42.858] Successful
I1205 20:58:42.859] message:yes
I1205 20:58:42.859] 0
I1205 20:58:42.859] has:0
... skipping 6 lines ...
I1205 20:58:43.082] role.rbac.authorization.k8s.io/testing-R reconciled
I1205 20:58:43.186] legacy-script.sh:736: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I1205 20:58:43.285] legacy-script.sh:737: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I1205 20:58:43.384] legacy-script.sh:738: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I1205 20:58:43.486] legacy-script.sh:739: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I1205 20:58:43.570] Successful
I1205 20:58:43.570] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I1205 20:58:43.570] has:only rbac.authorization.k8s.io/v1 is supported
I1205 20:58:43.676] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I1205 20:58:43.685] role.rbac.authorization.k8s.io "testing-R" deleted
I1205 20:58:43.696] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I1205 20:58:43.706] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I1205 20:58:43.718] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I1205 20:58:44.814] +++ Running case: test-cmd.run_kubectl_explain_tests 
I1205 20:58:44.816] +++ working dir: /go/src/k8s.io/kubernetes
I1205 20:58:44.818] +++ command: run_kubectl_explain_tests
I1205 20:58:44.828] +++ [1205 20:58:44] Testing kubectl(v1:explain)
W1205 20:58:44.928] I1205 20:58:44.706790   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043524-2358", Name:"cassandra", UID:"8d5b25d0-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"2753", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-9xjwq
W1205 20:58:44.929] I1205 20:58:44.712463   55589 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544043524-2358", Name:"cassandra", UID:"8d5b25d0-f8d0-11e8-83ce-0242ac110002", APIVersion:"v1", ResourceVersion:"2753", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-cqgbp
W1205 20:58:44.929] E1205 20:58:44.718207   55589 replica_set.go:450] Sync "namespace-1544043524-2358/cassandra" failed with Operation cannot be fulfilled on replicationcontrollers "cassandra": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1544043524-2358/cassandra, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8d5b25d0-f8d0-11e8-83ce-0242ac110002, UID in object meta: 
I1205 20:58:45.030] KIND:     Pod
I1205 20:58:45.030] VERSION:  v1
I1205 20:58:45.030] 
I1205 20:58:45.030] DESCRIPTION:
I1205 20:58:45.030]      Pod is a collection of containers that can run on a host. This resource is
I1205 20:58:45.031]      created by clients and scheduled onto hosts.
... skipping 849 lines ...
I1205 20:59:10.714] message:node/127.0.0.1 already uncordoned (dry run)
I1205 20:59:10.714] has:already uncordoned
I1205 20:59:10.803] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I1205 20:59:10.885] node/127.0.0.1 labeled
I1205 20:59:10.977] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I1205 20:59:11.044] Successful
I1205 20:59:11.044] message:error: cannot specify both a node name and a --selector option
I1205 20:59:11.044] See 'kubectl drain -h' for help and examples
I1205 20:59:11.045] has:cannot specify both a node name
I1205 20:59:11.109] Successful
I1205 20:59:11.109] message:error: USAGE: cordon NODE [flags]
I1205 20:59:11.109] See 'kubectl cordon -h' for help and examples
I1205 20:59:11.109] has:error\: USAGE\: cordon NODE
I1205 20:59:11.185] node/127.0.0.1 already uncordoned
I1205 20:59:11.256] Successful
I1205 20:59:11.257] message:error: You must provide one or more resources by argument or filename.
I1205 20:59:11.257] Example resource specifications include:
I1205 20:59:11.257]    '-f rsrc.yaml'
I1205 20:59:11.257]    '--filename=rsrc.json'
I1205 20:59:11.257]    '<resource> <name>'
I1205 20:59:11.257]    '<resource>'
I1205 20:59:11.257] has:must provide one or more resources
... skipping 15 lines ...
I1205 20:59:11.676] Successful
I1205 20:59:11.677] message:The following kubectl-compatible plugins are available:
I1205 20:59:11.677] 
I1205 20:59:11.677] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I1205 20:59:11.677]   - warning: kubectl-version overwrites existing command: "kubectl version"
I1205 20:59:11.677] 
I1205 20:59:11.678] error: one plugin warning was found
I1205 20:59:11.678] has:kubectl-version overwrites existing command: "kubectl version"
I1205 20:59:11.745] Successful
I1205 20:59:11.745] message:The following kubectl-compatible plugins are available:
I1205 20:59:11.746] 
I1205 20:59:11.746] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1205 20:59:11.746] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I1205 20:59:11.746]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1205 20:59:11.746] 
I1205 20:59:11.746] error: one plugin warning was found
I1205 20:59:11.746] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I1205 20:59:11.815] Successful
I1205 20:59:11.816] message:The following kubectl-compatible plugins are available:
I1205 20:59:11.816] 
I1205 20:59:11.816] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1205 20:59:11.816] has:plugins are available
I1205 20:59:11.886] Successful
I1205 20:59:11.886] message:
I1205 20:59:11.886] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I1205 20:59:11.886] error: unable to find any kubectl plugins in your PATH
I1205 20:59:11.886] has:unable to find any kubectl plugins in your PATH
I1205 20:59:11.961] Successful
I1205 20:59:11.961] message:I am plugin foo
I1205 20:59:11.961] has:plugin foo
I1205 20:59:12.030] Successful
I1205 20:59:12.031] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.852+a0c2788249ae15", GitCommit:"a0c2788249ae1582d10089e7a34bb54fc6b3879d", GitTreeState:"clean", BuildDate:"2018-12-05T20:52:35Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I1205 20:59:12.107] 
I1205 20:59:12.109] +++ Running case: test-cmd.run_impersonation_tests 
I1205 20:59:12.111] +++ working dir: /go/src/k8s.io/kubernetes
I1205 20:59:12.114] +++ command: run_impersonation_tests
I1205 20:59:12.124] +++ [1205 20:59:12] Testing impersonation
I1205 20:59:12.190] Successful
I1205 20:59:12.190] message:error: requesting groups or user-extra for  without impersonating a user
I1205 20:59:12.190] has:without impersonating a user
I1205 20:59:12.343] certificatesigningrequest.certificates.k8s.io/foo created
I1205 20:59:12.430] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I1205 20:59:12.513] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I1205 20:59:12.590] certificatesigningrequest.certificates.k8s.io "foo" deleted
I1205 20:59:12.746] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 139 lines ...
W1205 20:59:13.264] I1205 20:59:13.244995   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:59:13.265] I1205 20:59:13.244997   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:59:13.265] I1205 20:59:13.244934   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:59:13.265] I1205 20:59:13.245004   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:59:13.265] I1205 20:59:13.244892   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:59:13.265] I1205 20:59:13.245082   52228 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1205 20:59:13.265] E1205 20:59:13.245109   52228 controller.go:172] rpc error: code = Unavailable desc = transport is closing
W1205 20:59:13.290] + make test-integration
I1205 20:59:13.391] No resources found
I1205 20:59:13.391] pod "test-pod-1" force deleted
I1205 20:59:13.391] +++ [1205 20:59:13] TESTS PASSED
I1205 20:59:13.391] junit report dir: /workspace/artifacts
I1205 20:59:13.391] +++ [1205 20:59:13] Clean up complete
... skipping 227 lines ...
I1205 21:11:28.262] ok  	k8s.io/kubernetes/test/integration/replicationcontroller	55.072s
I1205 21:11:28.262] [restful] 2018/12/05 21:03:05 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:44353/swaggerapi
I1205 21:11:28.262] [restful] 2018/12/05 21:03:05 log.go:33: [restful/swagger] https://127.0.0.1:44353/swaggerui/ is mapped to folder /swagger-ui/
I1205 21:11:28.263] [restful] 2018/12/05 21:03:07 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:44353/swaggerapi
I1205 21:11:28.263] [restful] 2018/12/05 21:03:07 log.go:33: [restful/swagger] https://127.0.0.1:44353/swaggerui/ is mapped to folder /swagger-ui/
I1205 21:11:28.263] ok  	k8s.io/kubernetes/test/integration/scale	11.194s
I1205 21:11:28.263] FAIL	k8s.io/kubernetes/test/integration/scheduler	494.447s
I1205 21:11:28.263] ok  	k8s.io/kubernetes/test/integration/scheduler_perf	1.084s
I1205 21:11:28.263] ok  	k8s.io/kubernetes/test/integration/secrets	4.604s
I1205 21:11:28.263] ok  	k8s.io/kubernetes/test/integration/serviceaccount	45.728s
I1205 21:11:28.263] [restful] 2018/12/05 21:04:10 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:39309/swaggerapi
I1205 21:11:28.264] [restful] 2018/12/05 21:04:10 log.go:33: [restful/swagger] https://127.0.0.1:39309/swaggerui/ is mapped to folder /swagger-ui/
I1205 21:11:28.264] [restful] 2018/12/05 21:04:12 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:39309/swaggerapi
... skipping 7 lines ...
I1205 21:11:28.265] [restful] 2018/12/05 21:04:52 log.go:33: [restful/swagger] https://127.0.0.1:44105/swaggerui/ is mapped to folder /swagger-ui/
I1205 21:11:28.266] ok  	k8s.io/kubernetes/test/integration/tls	14.119s
I1205 21:11:28.266] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.049s
I1205 21:11:28.266] ok  	k8s.io/kubernetes/test/integration/volume	92.324s
I1205 21:11:28.266] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	146.305s
I1205 21:11:29.662] +++ [1205 21:11:29] Saved JUnit XML test report to /workspace/artifacts/junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181205-205922.xml
I1205 21:11:29.666] Makefile:184: recipe for target 'test' failed
I1205 21:11:29.676] +++ [1205 21:11:29] Cleaning up etcd
W1205 21:11:29.776] make[1]: *** [test] Error 1
W1205 21:11:29.777] !!! [1205 21:11:29] Call tree:
W1205 21:11:29.777] !!! [1205 21:11:29]  1: hack/make-rules/test-integration.sh:105 runTests(...)
W1205 21:11:29.846] make: *** [test-integration] Error 1
I1205 21:11:29.947] +++ [1205 21:11:29] Integration test cleanup complete
I1205 21:11:29.947] Makefile:203: recipe for target 'test-integration' failed
W1205 21:11:30.979] Traceback (most recent call last):
W1205 21:11:30.980]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 167, in <module>
W1205 21:11:30.980]     main(ARGS.branch, ARGS.script, ARGS.force, ARGS.prow)
W1205 21:11:30.980]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 136, in main
W1205 21:11:30.980]     check(*cmd)
W1205 21:11:30.981]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W1205 21:11:30.981]     subprocess.check_call(cmd)
W1205 21:11:30.981]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
W1205 21:11:31.013]     raise CalledProcessError(retcode, cmd)
W1205 21:11:31.014] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181105-ceed87206', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E1205 21:11:31.020] Command failed
I1205 21:11:31.020] process 530 exited with code 1 after 25.3m
E1205 21:11:31.021] FAIL: ci-kubernetes-integration-master
I1205 21:11:31.021] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1205 21:11:31.487] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1205 21:11:31.543] process 123450 exited with code 0 after 0.0m
I1205 21:11:31.544] Call:  gcloud config get-value account
I1205 21:11:31.811] process 123463 exited with code 0 after 0.0m
I1205 21:11:31.812] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1205 21:11:31.812] Upload result and artifacts...
I1205 21:11:31.812] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/7140
I1205 21:11:31.812] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7140/artifacts
W1205 21:11:33.524] CommandException: One or more URLs matched no objects.
E1205 21:11:33.713] Command failed
I1205 21:11:33.713] process 123476 exited with code 1 after 0.0m
W1205 21:11:33.713] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7140/artifacts not exist yet
I1205 21:11:33.714] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7140/artifacts
I1205 21:11:37.696] process 123621 exited with code 0 after 0.1m
W1205 21:11:37.697] metadata path /workspace/_artifacts/metadata.json does not exist
W1205 21:11:37.697] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...