Result: FAILURE
Tests: 1 failed / 578 succeeded
Started: 2018-12-06 16:03
Elapsed: 27m1s
Version: v1.14.0-alpha.0.883+0351853ea1ae78
Builder: gke-prow-default-pool-3c8994a8-293s
pod: 6a9be865-f970-11e8-b720-0a580a6c02d1
infra-commit: ea22e5d80
repo: k8s.io/kubernetes
repo-commit: 0351853ea1ae783ffe5db3cd6c1fef72bf5e57ec
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/replicaset TestAdoption 3.53s

go test -v k8s.io/kubernetes/test/integration/replicaset -run TestAdoption$
I1206 16:20:58.791692  118837 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I1206 16:20:58.791744  118837 master.go:272] Node port range unspecified. Defaulting to 30000-32767.
I1206 16:20:58.791758  118837 master.go:228] Using reconciler: 
I1206 16:20:58.793990  118837 clientconn.go:551] parsed scheme: ""
I1206 16:20:58.794009  118837 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 16:20:58.794091  118837 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 16:20:58.794157  118837 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 16:20:58.799287  118837 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 305 lines ...
I1206 16:20:58.936223  118837 clientconn.go:551] parsed scheme: ""
I1206 16:20:58.936248  118837 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 16:20:58.936280  118837 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 16:20:58.936401  118837 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 16:20:58.936971  118837 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 16:20:58.943274  118837 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W1206 16:20:58.959989  118837 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1206 16:20:58.960634  118837 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1206 16:20:58.962625  118837 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1206 16:20:58.975617  118837 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I1206 16:20:59.791039  118837 clientconn.go:551] parsed scheme: ""
I1206 16:20:59.791071  118837 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 16:20:59.791115  118837 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 16:20:59.791201  118837 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 16:20:59.791764  118837 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 16:20:59.983240  118837 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I1206 16:20:59.987006  118837 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I1206 16:20:59.987033  118837 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I1206 16:20:59.995558  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1206 16:21:00.001469  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1206 16:21:00.005319  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1206 16:21:00.008503  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I1206 16:21:00.011583  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I1206 16:21:00.014801  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I1206 16:21:00.018143  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1206 16:21:00.022001  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1206 16:21:00.025215  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1206 16:21:00.028170  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1206 16:21:00.031376  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I1206 16:21:00.034389  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1206 16:21:00.037780  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1206 16:21:00.042316  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1206 16:21:00.045333  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1206 16:21:00.048050  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1206 16:21:00.053896  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1206 16:21:00.057134  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1206 16:21:00.061022  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1206 16:21:00.064128  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1206 16:21:00.067484  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1206 16:21:00.070500  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1206 16:21:00.073516  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I1206 16:21:00.076517  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1206 16:21:00.080614  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1206 16:21:00.083987  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1206 16:21:00.088111  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1206 16:21:00.091797  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1206 16:21:00.094617  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1206 16:21:00.098214  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1206 16:21:00.101063  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1206 16:21:00.104082  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1206 16:21:00.107311  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1206 16:21:00.110222  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1206 16:21:00.113376  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1206 16:21:00.116442  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1206 16:21:00.120384  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1206 16:21:00.123487  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1206 16:21:00.126607  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1206 16:21:00.129494  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1206 16:21:00.132938  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1206 16:21:00.136279  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1206 16:21:00.140093  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1206 16:21:00.143319  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1206 16:21:00.146416  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1206 16:21:00.149259  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1206 16:21:00.151963  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1206 16:21:00.154820  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1206 16:21:00.157845  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1206 16:21:00.161154  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1206 16:21:00.180545  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1206 16:21:00.220390  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1206 16:21:00.260441  118837 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1206 16:21:00.301891  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1206 16:21:00.341012  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1206 16:21:00.380321  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1206 16:21:00.420276  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1206 16:21:00.460607  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1206 16:21:00.500502  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1206 16:21:00.540499  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1206 16:21:00.580495  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I1206 16:21:00.620499  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1206 16:21:00.660552  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1206 16:21:00.700417  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1206 16:21:00.745758  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1206 16:21:00.780382  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1206 16:21:00.820546  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1206 16:21:00.860973  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1206 16:21:00.900984  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1206 16:21:00.940928  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1206 16:21:00.980258  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1206 16:21:01.020453  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1206 16:21:01.060389  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1206 16:21:01.101328  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1206 16:21:01.140985  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1206 16:21:01.182113  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1206 16:21:01.221834  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1206 16:21:01.266209  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1206 16:21:01.300647  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1206 16:21:01.344541  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1206 16:21:01.380894  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1206 16:21:01.424143  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1206 16:21:01.461478  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1206 16:21:01.500760  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1206 16:21:01.540283  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1206 16:21:01.580621  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1206 16:21:01.620382  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1206 16:21:01.661666  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1206 16:21:01.700812  118837 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1206 16:21:01.744110  118837 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1206 16:21:01.782276  118837 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1206 16:21:01.822541  118837 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1206 16:21:01.861652  118837 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1206 16:21:01.900800  118837 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1206 16:21:01.942888  118837 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1206 16:21:01.985725  118837 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1206 16:21:02.021881  118837 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1206 16:21:02.065875  118837 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1206 16:21:02.103345  118837 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1206 16:21:02.149472  118837 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1206 16:21:02.181205  118837 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1206 16:21:02.220966  118837 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W1206 16:21:02.284056  118837 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W1206 16:21:02.284129  118837 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I1206 16:21:02.312510  118837 controller.go:170] Shutting down kubernetes service endpoint reconciler
				from junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181206-161731.xml
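
For context: TestAdoption exercises the ReplicaSet controller adopting orphaned pods whose labels match a ReplicaSet's selector, which the controller records by setting a controller OwnerReference on the pod. The snippet below is a minimal, hypothetical sketch of that adoption check written against a current client-go API (the 1.14-era test itself predates context-aware client signatures); it is not the actual TestAdoption source, and the kubeconfig path, "default" namespace, pod name "orphan-pod", and 30s timeout are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config); illustrative only.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// Poll until the pod "orphan-pod" (hypothetical name) reports a controller
	// OwnerReference, i.e. it has been adopted by some controller.
	err = wait.PollUntilContextTimeout(ctx, time.Second, 30*time.Second, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("default").Get(ctx, "orphan-pod", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return metav1.GetControllerOf(pod) != nil, nil
		})
	if err != nil {
		panic(fmt.Errorf("pod was never adopted: %w", err))
	}
	fmt.Println("pod adopted by a controller")
}
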



578 passed tests and 4 skipped tests are not shown.

Error lines from build-log.txt

... skipping 10 lines ...
I1206 16:03:57.062] process 197 exited with code 0 after 0.0m
I1206 16:03:57.062] Call:  gcloud config get-value account
I1206 16:03:57.301] process 210 exited with code 0 after 0.0m
I1206 16:03:57.301] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1206 16:03:57.301] Call:  kubectl get -oyaml pods/6a9be865-f970-11e8-b720-0a580a6c02d1
W1206 16:03:57.555] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E1206 16:03:57.559] Command failed
I1206 16:03:57.559] process 223 exited with code 1 after 0.0m
E1206 16:03:57.559] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/6a9be865-f970-11e8-b720-0a580a6c02d1']' returned non-zero exit status 1
I1206 16:03:57.559] Root: /workspace
I1206 16:03:57.559] cd to /workspace
I1206 16:03:57.559] Checkout: /workspace/k8s.io/kubernetes master to /workspace/k8s.io/kubernetes
I1206 16:03:57.560] Call:  git init k8s.io/kubernetes
... skipping 800 lines ...
W1206 16:12:38.225] I1206 16:12:38.224599   55476 controller_utils.go:1027] Waiting for caches to sync for PVC protection controller
W1206 16:12:38.225] I1206 16:12:38.224864   55476 controllermanager.go:516] Started "disruption"
W1206 16:12:38.226] I1206 16:12:38.225104   55476 disruption.go:288] Starting disruption controller
W1206 16:12:38.226] I1206 16:12:38.225119   55476 cronjob_controller.go:92] Starting CronJob Manager
W1206 16:12:38.226] I1206 16:12:38.225124   55476 controller_utils.go:1027] Waiting for caches to sync for disruption controller
W1206 16:12:38.226] I1206 16:12:38.225107   55476 controllermanager.go:516] Started "cronjob"
W1206 16:12:38.227] E1206 16:12:38.225661   55476 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1206 16:12:38.227] W1206 16:12:38.225732   55476 controllermanager.go:508] Skipping "service"
W1206 16:12:38.227] I1206 16:12:38.226912   55476 controllermanager.go:516] Started "persistentvolume-expander"
W1206 16:12:38.227] I1206 16:12:38.227348   55476 expand_controller.go:153] Starting expand controller
W1206 16:12:38.227] I1206 16:12:38.227539   55476 controller_utils.go:1027] Waiting for caches to sync for expand controller
W1206 16:12:38.228] W1206 16:12:38.227756   55476 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W1206 16:12:38.229] I1206 16:12:38.228838   55476 controllermanager.go:516] Started "attachdetach"
... skipping 25 lines ...
W1206 16:12:38.244] I1206 16:12:38.240478   55476 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
W1206 16:12:38.244] I1206 16:12:38.240514   55476 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
W1206 16:12:38.244] I1206 16:12:38.240546   55476 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
W1206 16:12:38.244] I1206 16:12:38.240624   55476 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
W1206 16:12:38.244] I1206 16:12:38.240685   55476 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
W1206 16:12:38.244] I1206 16:12:38.240737   55476 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W1206 16:12:38.245] E1206 16:12:38.240776   55476 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1206 16:12:38.245] I1206 16:12:38.240793   55476 controllermanager.go:516] Started "resourcequota"
W1206 16:12:38.245] I1206 16:12:38.240999   55476 resource_quota_controller.go:276] Starting resource quota controller
W1206 16:12:38.245] I1206 16:12:38.241043   55476 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W1206 16:12:38.245] I1206 16:12:38.241066   55476 resource_quota_monitor.go:301] QuotaMonitor running
W1206 16:12:38.246] I1206 16:12:38.241317   55476 controllermanager.go:516] Started "serviceaccount"
W1206 16:12:38.246] I1206 16:12:38.241348   55476 serviceaccounts_controller.go:115] Starting service account controller
... skipping 31 lines ...
W1206 16:12:38.256] I1206 16:12:38.254629   55476 controllermanager.go:516] Started "ttl"
W1206 16:12:38.256] W1206 16:12:38.254642   55476 controllermanager.go:495] "bootstrapsigner" is disabled
W1206 16:12:38.256] W1206 16:12:38.254647   55476 controllermanager.go:495] "tokencleaner" is disabled
W1206 16:12:38.257] W1206 16:12:38.254653   55476 controllermanager.go:508] Skipping "nodeipam"
W1206 16:12:38.257] I1206 16:12:38.254846   55476 ttl_controller.go:116] Starting TTL controller
W1206 16:12:38.257] I1206 16:12:38.254873   55476 controller_utils.go:1027] Waiting for caches to sync for TTL controller
W1206 16:12:38.257] W1206 16:12:38.254977   55476 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
W1206 16:12:38.257] I1206 16:12:38.255490   55476 controllermanager.go:516] Started "garbagecollector"
W1206 16:12:38.257] I1206 16:12:38.255493   55476 garbagecollector.go:133] Starting garbage collector controller
W1206 16:12:38.258] I1206 16:12:38.255727   55476 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1206 16:12:38.258] I1206 16:12:38.255746   55476 graph_builder.go:308] GraphBuilder running
W1206 16:12:38.258] I1206 16:12:38.255993   55476 controllermanager.go:516] Started "job"
W1206 16:12:38.258] I1206 16:12:38.256206   55476 job_controller.go:143] Starting job controller
... skipping 7 lines ...
W1206 16:12:38.325] I1206 16:12:38.324737   55476 controller_utils.go:1034] Caches are synced for persistent volume controller
W1206 16:12:38.325] I1206 16:12:38.324854   55476 controller_utils.go:1034] Caches are synced for PVC protection controller
W1206 16:12:38.328] I1206 16:12:38.328146   55476 controller_utils.go:1034] Caches are synced for expand controller
W1206 16:12:38.329] I1206 16:12:38.329257   55476 controller_utils.go:1034] Caches are synced for attach detach controller
W1206 16:12:38.331] I1206 16:12:38.331462   55476 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
W1206 16:12:38.332] I1206 16:12:38.331937   55476 controller_utils.go:1034] Caches are synced for ReplicationController controller
W1206 16:12:38.340] E1206 16:12:38.339723   55476 clusterroleaggregation_controller.go:180] view failed with: Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W1206 16:12:38.341] I1206 16:12:38.341479   55476 controller_utils.go:1034] Caches are synced for service account controller
W1206 16:12:38.342] I1206 16:12:38.342421   55476 controller_utils.go:1034] Caches are synced for daemon sets controller
W1206 16:12:38.343] I1206 16:12:38.343019   55476 controller_utils.go:1034] Caches are synced for ReplicaSet controller
W1206 16:12:38.344] I1206 16:12:38.343809   52130 controller.go:608] quota admission added evaluator for: serviceaccounts
W1206 16:12:38.345] E1206 16:12:38.344948   55476 clusterroleaggregation_controller.go:180] edit failed with: Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1206 16:12:38.352] I1206 16:12:38.352027   55476 controller_utils.go:1034] Caches are synced for namespace controller
W1206 16:12:38.353] I1206 16:12:38.353267   55476 controller_utils.go:1034] Caches are synced for GC controller
W1206 16:12:38.353] I1206 16:12:38.353614   55476 controller_utils.go:1034] Caches are synced for deployment controller
W1206 16:12:38.354] I1206 16:12:38.354472   55476 controller_utils.go:1034] Caches are synced for PV protection controller
W1206 16:12:38.355] I1206 16:12:38.355014   55476 controller_utils.go:1034] Caches are synced for TTL controller
W1206 16:12:38.356] I1206 16:12:38.356469   55476 controller_utils.go:1034] Caches are synced for job controller
... skipping 2 lines ...
W1206 16:12:38.424] I1206 16:12:38.423772   55476 controller_utils.go:1034] Caches are synced for stateful set controller
I1206 16:12:38.525] +++ [1206 16:12:38] On try 3, controller-manager: ok
I1206 16:12:38.548] node/127.0.0.1 created
I1206 16:12:38.557] +++ [1206 16:12:38] Checking kubectl version
I1206 16:12:38.618] Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.883+0351853ea1ae78", GitCommit:"0351853ea1ae783ffe5db3cd6c1fef72bf5e57ec", GitTreeState:"clean", BuildDate:"2018-12-06T16:10:48Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
I1206 16:12:38.618] Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.883+0351853ea1ae78", GitCommit:"0351853ea1ae783ffe5db3cd6c1fef72bf5e57ec", GitTreeState:"clean", BuildDate:"2018-12-06T16:11:07Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
W1206 16:12:38.719] W1206 16:12:38.550011   55476 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1206 16:12:38.719] I1206 16:12:38.552751   55476 controller_utils.go:1034] Caches are synced for taint controller
W1206 16:12:38.719] I1206 16:12:38.552908   55476 node_lifecycle_controller.go:1222] Initializing eviction metric for zone: 
W1206 16:12:38.720] I1206 16:12:38.553087   55476 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"c02afa56-f971-11e8-9434-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W1206 16:12:38.720] I1206 16:12:38.552956   55476 taint_manager.go:198] Starting NoExecuteTaintManager
W1206 16:12:38.720] I1206 16:12:38.553164   55476 node_lifecycle_controller.go:1072] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W1206 16:12:38.720] I1206 16:12:38.554110   55476 controller_utils.go:1034] Caches are synced for certificate controller
... skipping 28 lines ...
I1206 16:12:39.524] Successful: --output json has correct server info
I1206 16:12:39.528] +++ [1206 16:12:39] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I1206 16:12:39.662] Successful: --client --output json has correct client info
I1206 16:12:39.670] Successful: --client --output json has no server info
I1206 16:12:39.673] +++ [1206 16:12:39] Testing kubectl version: compare json output using additional --short flag
W1206 16:12:39.773] I1206 16:12:39.719849   55476 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1206 16:12:39.774] E1206 16:12:39.734309   55476 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1206 16:12:39.774] I1206 16:12:39.756049   55476 controller_utils.go:1034] Caches are synced for garbage collector controller
W1206 16:12:39.774] I1206 16:12:39.756116   55476 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1206 16:12:39.820] I1206 16:12:39.820192   55476 controller_utils.go:1034] Caches are synced for garbage collector controller
I1206 16:12:39.921] Successful: --short --output client json info is equal to non short result
I1206 16:12:39.921] Successful: --short --output server json info is equal to non short result
I1206 16:12:39.922] +++ [1206 16:12:39] Testing kubectl version: compare json output with yaml output
... skipping 45 lines ...
I1206 16:12:42.560] +++ working dir: /go/src/k8s.io/kubernetes
I1206 16:12:42.562] +++ command: run_RESTMapper_evaluation_tests
I1206 16:12:42.575] +++ [1206 16:12:42] Creating namespace namespace-1544112762-7736
I1206 16:12:42.643] namespace/namespace-1544112762-7736 created
I1206 16:12:42.708] Context "test" modified.
I1206 16:12:42.715] +++ [1206 16:12:42] Testing RESTMapper
I1206 16:12:42.830] +++ [1206 16:12:42] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1206 16:12:42.845] +++ exit code: 0
I1206 16:12:42.961] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1206 16:12:42.962] bindings                                                                      true         Binding
I1206 16:12:42.962] componentstatuses                 cs                                          false        ComponentStatus
I1206 16:12:42.962] configmaps                        cm                                          true         ConfigMap
I1206 16:12:42.962] endpoints                         ep                                          true         Endpoints
... skipping 606 lines ...
I1206 16:13:01.890] poddisruptionbudget.policy/test-pdb-3 created
I1206 16:13:01.982] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1206 16:13:02.054] poddisruptionbudget.policy/test-pdb-4 created
I1206 16:13:02.149] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1206 16:13:02.307] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:02.478] pod/env-test-pod created
W1206 16:13:02.578] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1206 16:13:02.579] error: setting 'all' parameter but found a non-empty selector.
W1206 16:13:02.579] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1206 16:13:02.579] I1206 16:13:01.565466   52130 controller.go:608] quota admission added evaluator for: poddisruptionbudgets.policy
W1206 16:13:02.579] error: min-available and max-unavailable cannot be both specified
I1206 16:13:02.680] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I1206 16:13:02.680] Name:               env-test-pod
I1206 16:13:02.680] Namespace:          test-kubectl-describe-pod
I1206 16:13:02.680] Priority:           0
I1206 16:13:02.680] PriorityClassName:  <none>
I1206 16:13:02.680] Node:               <none>
... skipping 145 lines ...
W1206 16:13:14.543] I1206 16:13:13.404288   55476 namespace_controller.go:171] Namespace has been deleted test-kubectl-describe-pod
W1206 16:13:14.543] I1206 16:13:14.081183   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112789-20963", Name:"modified", UID:"d55833e9-f971-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-vvqx2
I1206 16:13:14.685] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:14.828] pod/valid-pod created
I1206 16:13:14.920] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 16:13:15.067] Successful
I1206 16:13:15.067] message:Error from server: cannot restore map from string
I1206 16:13:15.067] has:cannot restore map from string
I1206 16:13:15.149] Successful
I1206 16:13:15.150] message:pod/valid-pod patched (no change)
I1206 16:13:15.150] has:patched (no change)
I1206 16:13:15.233] pod/valid-pod patched
I1206 16:13:15.328] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I1206 16:13:15.838] pod/valid-pod patched
I1206 16:13:15.931] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1206 16:13:16.003] pod/valid-pod patched
I1206 16:13:16.096] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1206 16:13:16.254] pod/valid-pod patched
I1206 16:13:16.350] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1206 16:13:16.521] +++ [1206 16:13:16] "kubectl patch with resourceVersion 487" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W1206 16:13:16.621] E1206 16:13:15.059909   52130 status.go:64] apiserver received an error that is not a metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I1206 16:13:16.756] pod "valid-pod" deleted
I1206 16:13:16.768] pod/valid-pod replaced
I1206 16:13:16.862] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1206 16:13:17.007] Successful
I1206 16:13:17.007] message:error: --grace-period must have --force specified
I1206 16:13:17.007] has:\-\-grace-period must have \-\-force specified
I1206 16:13:17.153] Successful
I1206 16:13:17.153] message:error: --timeout must have --force specified
I1206 16:13:17.153] has:\-\-timeout must have \-\-force specified
W1206 16:13:17.296] W1206 16:13:17.295504   55476 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1206 16:13:17.396] node/node-v1-test created
I1206 16:13:17.443] node/node-v1-test replaced
I1206 16:13:17.535] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1206 16:13:17.612] node "node-v1-test" deleted
I1206 16:13:17.709] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1206 16:13:17.961] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 58 lines ...
I1206 16:13:22.844] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:22.985] pod/test-pod created
W1206 16:13:23.085] Edit cancelled, no changes made.
W1206 16:13:23.086] Edit cancelled, no changes made.
W1206 16:13:23.086] Edit cancelled, no changes made.
W1206 16:13:23.086] Edit cancelled, no changes made.
W1206 16:13:23.086] error: 'name' already has a value (valid-pod), and --overwrite is false
W1206 16:13:23.086] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1206 16:13:23.086] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1206 16:13:23.187] pod "test-pod" deleted
I1206 16:13:23.187] +++ [1206 16:13:23] Creating namespace namespace-1544112803-23770
I1206 16:13:23.230] namespace/namespace-1544112803-23770 created
I1206 16:13:23.298] Context "test" modified.
... skipping 41 lines ...
I1206 16:13:26.394] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1206 16:13:26.396] +++ working dir: /go/src/k8s.io/kubernetes
I1206 16:13:26.400] +++ command: run_kubectl_create_error_tests
I1206 16:13:26.411] +++ [1206 16:13:26] Creating namespace namespace-1544112806-30993
I1206 16:13:26.478] namespace/namespace-1544112806-30993 created
I1206 16:13:26.545] Context "test" modified.
I1206 16:13:26.551] +++ [1206 16:13:26] Testing kubectl create with error
W1206 16:13:26.652] Error: required flag(s) "filename" not set
W1206 16:13:26.652] 
W1206 16:13:26.652] 
W1206 16:13:26.652] Examples:
W1206 16:13:26.652]   # Create a pod using the data in pod.json.
W1206 16:13:26.652]   kubectl create -f ./pod.json
W1206 16:13:26.653]   
... skipping 38 lines ...
W1206 16:13:26.657]   kubectl create -f FILENAME [options]
W1206 16:13:26.657] 
W1206 16:13:26.657] Use "kubectl <command> --help" for more information about a given command.
W1206 16:13:26.657] Use "kubectl options" for a list of global command-line options (applies to all commands).
W1206 16:13:26.657] 
W1206 16:13:26.657] required flag(s) "filename" not set
I1206 16:13:26.770] +++ [1206 16:13:26] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1206 16:13:26.870] kubectl convert is DEPRECATED and will be removed in a future version.
W1206 16:13:26.870] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1206 16:13:26.971] +++ exit code: 0
I1206 16:13:26.981] Recording: run_kubectl_apply_tests
I1206 16:13:26.981] Running command: run_kubectl_apply_tests
I1206 16:13:27.003] 
... skipping 13 lines ...
I1206 16:13:28.006] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I1206 16:13:28.892] deployment.extensions "test-deployment-retainkeys" deleted
I1206 16:13:28.984] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:29.136] pod/selector-test-pod created
I1206 16:13:29.230] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1206 16:13:29.313] Successful
I1206 16:13:29.313] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1206 16:13:29.313] has:pods "selector-test-pod-dont-apply" not found
I1206 16:13:29.391] pod "selector-test-pod" deleted
I1206 16:13:29.479] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:29.693] pod/test-pod created (server dry run)
I1206 16:13:29.787] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:29.935] pod/test-pod created
... skipping 12 lines ...
W1206 16:13:30.660] I1206 16:13:30.659495   52130 clientconn.go:551] parsed scheme: ""
W1206 16:13:30.660] I1206 16:13:30.659532   52130 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1206 16:13:30.660] I1206 16:13:30.659578   52130 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1206 16:13:30.660] I1206 16:13:30.659617   52130 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 16:13:30.661] I1206 16:13:30.660142   52130 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 16:13:30.739] I1206 16:13:30.738515   52130 controller.go:608] quota admission added evaluator for: resources.mygroup.example.com
W1206 16:13:30.824] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1206 16:13:30.925] kind.mygroup.example.com/myobj created (server dry run)
I1206 16:13:30.925] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1206 16:13:31.009] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:31.158] pod/a created
I1206 16:13:32.655] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I1206 16:13:32.739] Successful
I1206 16:13:32.739] message:Error from server (NotFound): pods "b" not found
I1206 16:13:32.739] has:pods "b" not found
I1206 16:13:32.884] pod/b created
I1206 16:13:32.899] pod/a pruned
I1206 16:13:34.585] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I1206 16:13:34.668] Successful
I1206 16:13:34.668] message:Error from server (NotFound): pods "a" not found
I1206 16:13:34.668] has:pods "a" not found
I1206 16:13:34.746] pod "b" deleted
I1206 16:13:34.842] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:34.988] pod/a created
I1206 16:13:35.084] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I1206 16:13:35.173] Successful
I1206 16:13:35.173] message:Error from server (NotFound): pods "b" not found
I1206 16:13:35.174] has:pods "b" not found
I1206 16:13:35.329] pod/b created
I1206 16:13:35.426] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I1206 16:13:35.510] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I1206 16:13:35.589] pod "a" deleted
I1206 16:13:35.596] pod "b" deleted
I1206 16:13:35.762] Successful
I1206 16:13:35.762] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I1206 16:13:35.762] has:all resources selected for prune without explicitly passing --all
I1206 16:13:35.908] pod/a created
I1206 16:13:35.916] pod/b created
I1206 16:13:35.927] service/prune-svc created
I1206 16:13:37.430] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I1206 16:13:37.517] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 127 lines ...
I1206 16:13:49.823] Context "test" modified.
I1206 16:13:49.829] +++ [1206 16:13:49] Testing kubectl create filter
I1206 16:13:49.910] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:50.046] pod/selector-test-pod created
I1206 16:13:50.139] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1206 16:13:50.223] Successful
I1206 16:13:50.224] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1206 16:13:50.224] has:pods "selector-test-pod-dont-apply" not found
I1206 16:13:50.299] pod "selector-test-pod" deleted
I1206 16:13:50.317] +++ exit code: 0
I1206 16:13:50.349] Recording: run_kubectl_apply_deployments_tests
I1206 16:13:50.350] Running command: run_kubectl_apply_deployments_tests
I1206 16:13:50.370] 
... skipping 45 lines ...
I1206 16:13:52.295] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:52.380] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:52.463] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:13:52.615] deployment.extensions/nginx created
I1206 16:13:52.710] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I1206 16:13:56.904] Successful
I1206 16:13:56.905] message:Error from server (Conflict): error when applying patch:
I1206 16:13:56.905] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544112830-28948\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1206 16:13:56.905] to:
I1206 16:13:56.905] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I1206 16:13:56.906] Name: "nginx", Namespace: "namespace-1544112830-28948"
I1206 16:13:56.907] Object: &{map["status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["message":"Deployment does not have minimum availability." "type":"Available" "status":"False" "lastUpdateTime":"2018-12-06T16:13:52Z" "lastTransitionTime":"2018-12-06T16:13:52Z" "reason":"MinimumReplicasUnavailable"]]] "kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["creationTimestamp":"2018-12-06T16:13:52Z" "labels":map["name":"nginx"] "annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544112830-28948\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n" "deployment.kubernetes.io/revision":"1"] "name":"nginx" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1544112830-28948/deployments/nginx" "uid":"ec509587-f971-11e8-9434-0242ac110002" "resourceVersion":"707" "generation":'\x01' "namespace":"namespace-1544112830-28948"] "spec":map["replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["labels":map["name":"nginx1"] "creationTimestamp":<nil>] "spec":map["dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e']] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647)]]}
I1206 16:13:56.907] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I1206 16:13:56.907] has:Error from server (Conflict)
W1206 16:13:57.008] I1206 16:13:52.619556   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112830-28948", Name:"nginx", UID:"ec509587-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W1206 16:13:57.009] I1206 16:13:52.622386   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112830-28948", Name:"nginx-5d56d6b95f", UID:"ec512ecc-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-6f866
W1206 16:13:57.009] I1206 16:13:52.625321   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112830-28948", Name:"nginx-5d56d6b95f", UID:"ec512ecc-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-fdrdt
W1206 16:13:57.009] I1206 16:13:52.625415   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112830-28948", Name:"nginx-5d56d6b95f", UID:"ec512ecc-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-kgvkn
I1206 16:14:02.121] deployment.extensions/nginx configured
I1206 16:14:02.214] Successful
I1206 16:14:02.215] message:        "name": "nginx2"
I1206 16:14:02.215]           "name": "nginx2"
I1206 16:14:02.215] has:"name": "nginx2"
W1206 16:14:02.316] I1206 16:14:02.124696   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112830-28948", Name:"nginx", UID:"f1fb0722-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
W1206 16:14:02.317] I1206 16:14:02.129407   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112830-28948", Name:"nginx-7777658b9d", UID:"f1fbaa27-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-trndt
W1206 16:14:02.317] I1206 16:14:02.132457   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112830-28948", Name:"nginx-7777658b9d", UID:"f1fbaa27-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-mn62w
W1206 16:14:02.317] I1206 16:14:02.132517   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112830-28948", Name:"nginx-7777658b9d", UID:"f1fbaa27-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-qhbx8
W1206 16:14:06.433] E1206 16:14:06.432811   55476 replica_set.go:450] Sync "namespace-1544112830-28948/nginx-7777658b9d" failed with Operation cannot be fulfilled on replicasets.apps "nginx-7777658b9d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1544112830-28948/nginx-7777658b9d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f1fbaa27-f971-11e8-9434-0242ac110002, UID in object meta: 
W1206 16:14:07.421] I1206 16:14:07.420323   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112830-28948", Name:"nginx", UID:"f5230edc-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
W1206 16:14:07.424] I1206 16:14:07.423788   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112830-28948", Name:"nginx-7777658b9d", UID:"f523b3a6-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-wpv6j
W1206 16:14:07.426] I1206 16:14:07.426156   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112830-28948", Name:"nginx-7777658b9d", UID:"f523b3a6-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-pwv22
W1206 16:14:07.427] I1206 16:14:07.427176   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112830-28948", Name:"nginx-7777658b9d", UID:"f523b3a6-f971-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-hdn9v
I1206 16:14:07.528] Successful
I1206 16:14:07.528] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 73 lines ...
I1206 16:14:08.609] +++ [1206 16:14:08] Creating namespace namespace-1544112848-28905
I1206 16:14:08.680] namespace/namespace-1544112848-28905 created
I1206 16:14:08.750] Context "test" modified.
I1206 16:14:08.757] +++ [1206 16:14:08] Testing kubectl get
I1206 16:14:08.845] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:08.930] Successful
I1206 16:14:08.930] message:Error from server (NotFound): pods "abc" not found
I1206 16:14:08.930] has:pods "abc" not found
I1206 16:14:09.018] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:09.104] Successful
I1206 16:14:09.105] message:Error from server (NotFound): pods "abc" not found
I1206 16:14:09.105] has:pods "abc" not found
I1206 16:14:09.193] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:09.277] Successful
I1206 16:14:09.277] message:{
I1206 16:14:09.278]     "apiVersion": "v1",
I1206 16:14:09.278]     "items": [],
... skipping 23 lines ...
I1206 16:14:09.606] has not:No resources found
I1206 16:14:09.686] Successful
I1206 16:14:09.687] message:NAME
I1206 16:14:09.687] has not:No resources found
I1206 16:14:09.776] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:09.890] Successful
I1206 16:14:09.890] message:error: the server doesn't have a resource type "foobar"
I1206 16:14:09.890] has not:No resources found
I1206 16:14:09.973] Successful
I1206 16:14:09.974] message:No resources found.
I1206 16:14:09.974] has:No resources found
I1206 16:14:10.056] Successful
I1206 16:14:10.056] message:
I1206 16:14:10.056] has not:No resources found
I1206 16:14:10.138] Successful
I1206 16:14:10.138] message:No resources found.
I1206 16:14:10.139] has:No resources found
I1206 16:14:10.227] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:10.311] Successful
I1206 16:14:10.312] message:Error from server (NotFound): pods "abc" not found
I1206 16:14:10.312] has:pods "abc" not found
I1206 16:14:10.313] FAIL!
I1206 16:14:10.314] message:Error from server (NotFound): pods "abc" not found
I1206 16:14:10.314] has not:List
I1206 16:14:10.314] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1206 16:14:10.428] Successful
I1206 16:14:10.428] message:I1206 16:14:10.376256   67685 loader.go:359] Config loaded from file /tmp/tmp.WbkFmQIcwJ/.kube/config
I1206 16:14:10.429] I1206 16:14:10.376827   67685 loader.go:359] Config loaded from file /tmp/tmp.WbkFmQIcwJ/.kube/config
I1206 16:14:10.429] I1206 16:14:10.378301   67685 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 995 lines ...
I1206 16:14:13.922] }
I1206 16:14:14.009] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 16:14:14.253] <no value>Successful
I1206 16:14:14.253] message:valid-pod:
I1206 16:14:14.253] has:valid-pod:
I1206 16:14:14.336] Successful
I1206 16:14:14.336] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1206 16:14:14.336] 	template was:
I1206 16:14:14.336] 		{.missing}
I1206 16:14:14.336] 	object given to jsonpath engine was:
I1206 16:14:14.337] 		map[string]interface {}{"spec":map[string]interface {}{"schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}, "kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"valid-pod", "namespace":"namespace-1544112853-30339", "selfLink":"/api/v1/namespaces/namespace-1544112853-30339/pods/valid-pod", "uid":"f8f66bdc-f971-11e8-9434-0242ac110002", "resourceVersion":"799", "creationTimestamp":"2018-12-06T16:14:13Z", "labels":map[string]interface {}{"name":"valid-pod"}}}
I1206 16:14:14.337] has:missing is not found
I1206 16:14:14.418] Successful
I1206 16:14:14.419] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1206 16:14:14.419] 	template was:
I1206 16:14:14.419] 		{{.missing}}
I1206 16:14:14.419] 	raw data was:
I1206 16:14:14.419] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2018-12-06T16:14:13Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1544112853-30339","resourceVersion":"799","selfLink":"/api/v1/namespaces/namespace-1544112853-30339/pods/valid-pod","uid":"f8f66bdc-f971-11e8-9434-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1206 16:14:14.420] 	object given to template engine was:
I1206 16:14:14.420] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2018-12-06T16:14:13Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1544112853-30339 resourceVersion:799 selfLink:/api/v1/namespaces/namespace-1544112853-30339/pods/valid-pod uid:f8f66bdc-f971-11e8-9434-0242ac110002] spec:map[containers:[map[terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I1206 16:14:14.420] has:map has no entry for key "missing"
W1206 16:14:14.521] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W1206 16:14:15.497] E1206 16:14:15.496818   68073 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I1206 16:14:15.598] Successful
I1206 16:14:15.598] message:NAME        READY   STATUS    RESTARTS   AGE
I1206 16:14:15.599] valid-pod   0/1     Pending   0          1s
I1206 16:14:15.599] has:STATUS
I1206 16:14:15.599] Successful
... skipping 80 lines ...
I1206 16:14:17.787]   terminationGracePeriodSeconds: 30
I1206 16:14:17.787] status:
I1206 16:14:17.787]   phase: Pending
I1206 16:14:17.787]   qosClass: Guaranteed
I1206 16:14:17.787] has:name: valid-pod
I1206 16:14:17.787] Successful
I1206 16:14:17.787] message:Error from server (NotFound): pods "invalid-pod" not found
I1206 16:14:17.787] has:"invalid-pod" not found
I1206 16:14:17.874] pod "valid-pod" deleted
I1206 16:14:17.983] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:18.158] pod/redis-master created
I1206 16:14:18.163] pod/valid-pod created
I1206 16:14:18.265] Successful
... skipping 305 lines ...
I1206 16:14:22.536] Running command: run_create_secret_tests
I1206 16:14:22.557] 
I1206 16:14:22.559] +++ Running case: test-cmd.run_create_secret_tests 
I1206 16:14:22.561] +++ working dir: /go/src/k8s.io/kubernetes
I1206 16:14:22.564] +++ command: run_create_secret_tests
I1206 16:14:22.663] Successful
I1206 16:14:22.663] message:Error from server (NotFound): secrets "mysecret" not found
I1206 16:14:22.664] has:secrets "mysecret" not found
W1206 16:14:22.764] I1206 16:14:21.692034   52130 clientconn.go:551] parsed scheme: ""
W1206 16:14:22.764] I1206 16:14:21.692074   52130 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1206 16:14:22.764] I1206 16:14:21.692158   52130 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1206 16:14:22.765] I1206 16:14:21.692261   52130 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 16:14:22.765] I1206 16:14:21.692839   52130 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 16:14:22.765] No resources found.
W1206 16:14:22.765] No resources found.
I1206 16:14:22.865] Successful
I1206 16:14:22.866] message:Error from server (NotFound): secrets "mysecret" not found
I1206 16:14:22.866] has:secrets "mysecret" not found
I1206 16:14:22.866] Successful
I1206 16:14:22.866] message:user-specified
I1206 16:14:22.866] has:user-specified
I1206 16:14:22.892] Successful
I1206 16:14:22.964] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"fe671224-f971-11e8-9434-0242ac110002","resourceVersion":"874","creationTimestamp":"2018-12-06T16:14:22Z"}}
... skipping 80 lines ...
I1206 16:14:24.874] has:Timeout exceeded while reading body
I1206 16:14:24.955] Successful
I1206 16:14:24.955] message:NAME        READY   STATUS    RESTARTS   AGE
I1206 16:14:24.955] valid-pod   0/1     Pending   0          1s
I1206 16:14:24.955] has:valid-pod
I1206 16:14:25.025] Successful
I1206 16:14:25.026] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1206 16:14:25.026] has:Invalid timeout value
I1206 16:14:25.104] pod "valid-pod" deleted
I1206 16:14:25.125] +++ exit code: 0
I1206 16:14:25.162] Recording: run_crd_tests
I1206 16:14:25.162] Running command: run_crd_tests
I1206 16:14:25.184] 
... skipping 167 lines ...
I1206 16:14:29.642] crd.sh:237: Successful get foos/test {{.patched}}: value1
I1206 16:14:29.732] foo.company.com/test patched
I1206 16:14:29.831] crd.sh:239: Successful get foos/test {{.patched}}: value2
I1206 16:14:29.926] foo.company.com/test patched
W1206 16:14:30.027] I1206 16:14:27.763227   52130 controller.go:608] quota admission added evaluator for: foos.company.com
I1206 16:14:30.128] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I1206 16:14:30.200] +++ [1206 16:14:30] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I1206 16:14:30.268] {
I1206 16:14:30.268]     "apiVersion": "company.com/v1",
I1206 16:14:30.268]     "kind": "Foo",
I1206 16:14:30.268]     "metadata": {
I1206 16:14:30.269]         "annotations": {
I1206 16:14:30.269]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 112 lines ...
I1206 16:14:31.752] bar.company.com "test" deleted
W1206 16:14:31.853] I1206 16:14:31.496745   52130 controller.go:608] quota admission added evaluator for: bars.company.com
W1206 16:14:31.853] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 70570 Killed                  while [ ${tries} -lt 10 ]; do
W1206 16:14:31.854]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W1206 16:14:31.854] done
W1206 16:14:31.854] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 70569 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W1206 16:14:39.858] E1206 16:14:39.857085   55476 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos"]
W1206 16:14:40.048] I1206 16:14:40.048116   55476 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1206 16:14:40.050] I1206 16:14:40.050009   52130 clientconn.go:551] parsed scheme: ""
W1206 16:14:40.050] I1206 16:14:40.050051   52130 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1206 16:14:40.051] I1206 16:14:40.050094   52130 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1206 16:14:40.051] I1206 16:14:40.050177   52130 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 16:14:40.051] I1206 16:14:40.050551   52130 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 81 lines ...
I1206 16:14:52.184] +++ [1206 16:14:52] Testing cmd with image
I1206 16:14:52.280] Successful
I1206 16:14:52.281] message:deployment.apps/test1 created
I1206 16:14:52.281] has:deployment.apps/test1 created
I1206 16:14:52.361] deployment.extensions "test1" deleted
I1206 16:14:52.434] Successful
I1206 16:14:52.435] message:error: Invalid image name "InvalidImageName": invalid reference format
I1206 16:14:52.435] has:error: Invalid image name "InvalidImageName": invalid reference format
I1206 16:14:52.449] +++ exit code: 0
I1206 16:14:52.488] Recording: run_recursive_resources_tests
I1206 16:14:52.488] Running command: run_recursive_resources_tests
I1206 16:14:52.508] 
I1206 16:14:52.510] +++ Running case: test-cmd.run_recursive_resources_tests 
I1206 16:14:52.512] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I1206 16:14:52.664] Context "test" modified.
I1206 16:14:52.756] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:52.999] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:53.001] Successful
I1206 16:14:53.002] message:pod/busybox0 created
I1206 16:14:53.002] pod/busybox1 created
I1206 16:14:53.002] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1206 16:14:53.002] has:error validating data: kind not set
I1206 16:14:53.091] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:53.278] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1206 16:14:53.280] Successful
I1206 16:14:53.281] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 16:14:53.281] has:Object 'Kind' is missing
I1206 16:14:53.371] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:53.616] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1206 16:14:53.618] Successful
I1206 16:14:53.618] message:pod/busybox0 replaced
I1206 16:14:53.619] pod/busybox1 replaced
I1206 16:14:53.619] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1206 16:14:53.619] has:error validating data: kind not set
I1206 16:14:53.709] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:53.804] Successful
I1206 16:14:53.804] message:Name:               busybox0
I1206 16:14:53.804] Namespace:          namespace-1544112892-10105
I1206 16:14:53.804] Priority:           0
I1206 16:14:53.804] PriorityClassName:  <none>
... skipping 159 lines ...
I1206 16:14:53.815] has:Object 'Kind' is missing
I1206 16:14:53.896] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:54.077] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1206 16:14:54.079] Successful
I1206 16:14:54.079] message:pod/busybox0 annotated
I1206 16:14:54.079] pod/busybox1 annotated
I1206 16:14:54.080] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 16:14:54.080] has:Object 'Kind' is missing
I1206 16:14:54.172] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:54.435] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1206 16:14:54.437] Successful
I1206 16:14:54.438] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1206 16:14:54.438] pod/busybox0 configured
I1206 16:14:54.438] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1206 16:14:54.438] pod/busybox1 configured
I1206 16:14:54.439] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1206 16:14:54.439] has:error validating data: kind not set
I1206 16:14:54.528] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:54.678] deployment.extensions/nginx created
I1206 16:14:54.780] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1206 16:14:54.869] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 16:14:55.029] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I1206 16:14:55.032] Successful
... skipping 42 lines ...
I1206 16:14:55.110] deployment.extensions "nginx" deleted
I1206 16:14:55.210] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:55.377] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:55.379] Successful
I1206 16:14:55.380] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1206 16:14:55.380] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1206 16:14:55.380] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 16:14:55.380] has:Object 'Kind' is missing
I1206 16:14:55.474] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:55.555] Successful
I1206 16:14:55.556] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 16:14:55.556] has:busybox0:busybox1:
I1206 16:14:55.558] Successful
I1206 16:14:55.558] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 16:14:55.558] has:Object 'Kind' is missing
I1206 16:14:55.648] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:55.739] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 16:14:55.833] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1206 16:14:55.835] Successful
I1206 16:14:55.835] message:pod/busybox0 labeled
I1206 16:14:55.835] pod/busybox1 labeled
I1206 16:14:55.836] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 16:14:55.836] has:Object 'Kind' is missing
I1206 16:14:55.926] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:56.012] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 16:14:56.102] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1206 16:14:56.104] Successful
I1206 16:14:56.104] message:pod/busybox0 patched
I1206 16:14:56.105] pod/busybox1 patched
I1206 16:14:56.105] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 16:14:56.105] has:Object 'Kind' is missing
I1206 16:14:56.195] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:56.374] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:56.376] Successful
I1206 16:14:56.377] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1206 16:14:56.377] pod "busybox0" force deleted
I1206 16:14:56.377] pod "busybox1" force deleted
I1206 16:14:56.377] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 16:14:56.377] has:Object 'Kind' is missing
I1206 16:14:56.463] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:56.611] replicationcontroller/busybox0 created
I1206 16:14:56.614] replicationcontroller/busybox1 created
I1206 16:14:56.711] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:56.805] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:56.898] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I1206 16:14:56.987] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I1206 16:14:57.167] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1206 16:14:57.256] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1206 16:14:57.259] Successful
I1206 16:14:57.260] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1206 16:14:57.260] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1206 16:14:57.260] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:14:57.260] has:Object 'Kind' is missing
I1206 16:14:57.341] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1206 16:14:57.425] horizontalpodautoscaler.autoscaling "busybox1" deleted
I1206 16:14:57.522] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:57.608] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I1206 16:14:57.698] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I1206 16:14:57.885] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1206 16:14:57.980] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1206 16:14:57.983] Successful
I1206 16:14:57.983] message:service/busybox0 exposed
I1206 16:14:57.983] service/busybox1 exposed
I1206 16:14:57.983] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:14:57.984] has:Object 'Kind' is missing
I1206 16:14:58.073] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:58.160] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I1206 16:14:58.243] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I1206 16:14:58.438] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I1206 16:14:58.526] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I1206 16:14:58.528] Successful
I1206 16:14:58.528] message:replicationcontroller/busybox0 scaled
I1206 16:14:58.529] replicationcontroller/busybox1 scaled
I1206 16:14:58.529] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:14:58.529] has:Object 'Kind' is missing
I1206 16:14:58.618] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 16:14:58.796] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:58.798] Successful
I1206 16:14:58.799] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1206 16:14:58.799] replicationcontroller "busybox0" force deleted
I1206 16:14:58.799] replicationcontroller "busybox1" force deleted
I1206 16:14:58.799] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:14:58.800] has:Object 'Kind' is missing
I1206 16:14:58.890] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:14:59.035] deployment.extensions/nginx1-deployment created
I1206 16:14:59.039] deployment.extensions/nginx0-deployment created
I1206 16:14:59.142] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1206 16:14:59.234] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1206 16:14:59.446] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1206 16:14:59.448] Successful
I1206 16:14:59.448] message:deployment.extensions/nginx1-deployment skipped rollback (current template already matches revision 1)
I1206 16:14:59.448] deployment.extensions/nginx0-deployment skipped rollback (current template already matches revision 1)
I1206 16:14:59.449] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 16:14:59.449] has:Object 'Kind' is missing
I1206 16:14:59.535] deployment.extensions/nginx1-deployment paused
I1206 16:14:59.538] deployment.extensions/nginx0-deployment paused
I1206 16:14:59.635] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1206 16:14:59.637] Successful
I1206 16:14:59.638] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 16:14:59.638] has:Object 'Kind' is missing
I1206 16:14:59.724] deployment.extensions/nginx1-deployment resumed
I1206 16:14:59.728] deployment.extensions/nginx0-deployment resumed
I1206 16:14:59.829] generic-resources.sh:408: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I1206 16:14:59.832] Successful
I1206 16:14:59.832] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 16:14:59.832] has:Object 'Kind' is missing
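The pause/resume checks above exercise kubectl's directory recursion: every valid manifest under the tree is still processed, and the one broken file only contributes a decode error. A sketch of the pattern, assuming the harness drives it through the rollout subcommands with the documented -f/--recursive flags:

    kubectl rollout pause -f hack/testdata/recursive/deployment --recursive
    # deployment.extensions/nginx1-deployment paused
    # deployment.extensions/nginx0-deployment paused
    # unable to decode ".../nginx-broken.yaml": Object 'Kind' is missing in '...'
    kubectl rollout resume -f hack/testdata/recursive/deployment --recursive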
W1206 16:14:59.933] Error from server (NotFound): namespaces "non-native-resources" not found
W1206 16:14:59.933] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1206 16:14:59.934] I1206 16:14:52.273181   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112892-183", Name:"test1", UID:"0fdf1453-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"983", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W1206 16:14:59.934] I1206 16:14:52.280216   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112892-183", Name:"test1-fb488bd5d", UID:"0fdfaad3-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"984", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-48s27
W1206 16:14:59.934] I1206 16:14:54.683055   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112892-10105", Name:"nginx", UID:"114ead41-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1009", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W1206 16:14:59.935] I1206 16:14:54.686123   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112892-10105", Name:"nginx-6f6bb85d9c", UID:"114f5134-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1010", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-65bfv
W1206 16:14:59.935] I1206 16:14:54.689256   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112892-10105", Name:"nginx-6f6bb85d9c", UID:"114f5134-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1010", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-ncd8z
W1206 16:14:59.935] I1206 16:14:54.689567   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112892-10105", Name:"nginx-6f6bb85d9c", UID:"114f5134-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1010", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-dj9pj
W1206 16:14:59.935] kubectl convert is DEPRECATED and will be removed in a future version.
W1206 16:14:59.936] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1206 16:14:59.936] I1206 16:14:56.342476   55476 namespace_controller.go:171] Namespace has been deleted non-native-resources
W1206 16:14:59.936] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1206 16:14:59.936] I1206 16:14:56.614721   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112892-10105", Name:"busybox0", UID:"127583d4-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1039", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-8tbxf
W1206 16:14:59.937] I1206 16:14:56.617314   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112892-10105", Name:"busybox1", UID:"12763e84-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1041", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-45m9c
W1206 16:14:59.937] I1206 16:14:58.338288   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112892-10105", Name:"busybox0", UID:"127583d4-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1061", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-rbwtv
W1206 16:14:59.937] I1206 16:14:58.348339   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112892-10105", Name:"busybox1", UID:"12763e84-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1065", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-r8nht
W1206 16:14:59.937] I1206 16:14:59.038298   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112892-10105", Name:"nginx1-deployment", UID:"13e75577-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1082", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W1206 16:14:59.938] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1206 16:14:59.938] I1206 16:14:59.043058   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112892-10105", Name:"nginx1-deployment-75f6fc6747", UID:"13e7f1de-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1083", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-8r6gq
W1206 16:14:59.938] I1206 16:14:59.043347   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112892-10105", Name:"nginx0-deployment", UID:"13e817c1-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1084", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W1206 16:14:59.939] I1206 16:14:59.046067   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112892-10105", Name:"nginx0-deployment-b6bb4ccbb", UID:"13e8c578-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-fmclx
W1206 16:14:59.939] I1206 16:14:59.047875   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112892-10105", Name:"nginx1-deployment-75f6fc6747", UID:"13e7f1de-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1083", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-cscn7
W1206 16:14:59.939] I1206 16:14:59.048972   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112892-10105", Name:"nginx0-deployment-b6bb4ccbb", UID:"13e8c578-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-2r8cs
W1206 16:15:00.020] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1206 16:15:00.036] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 16:15:00.137] Successful
I1206 16:15:00.137] message:deployment.extensions/nginx1-deployment 
I1206 16:15:00.137] REVISION  CHANGE-CAUSE
I1206 16:15:00.138] 1         <none>
I1206 16:15:00.138] 
I1206 16:15:00.138] deployment.extensions/nginx0-deployment 
I1206 16:15:00.138] REVISION  CHANGE-CAUSE
I1206 16:15:00.138] 1         <none>
I1206 16:15:00.138] 
I1206 16:15:00.138] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 16:15:00.138] has:nginx0-deployment
I1206 16:15:00.138] Successful
I1206 16:15:00.138] message:deployment.extensions/nginx1-deployment 
I1206 16:15:00.139] REVISION  CHANGE-CAUSE
I1206 16:15:00.139] 1         <none>
I1206 16:15:00.139] 
I1206 16:15:00.139] deployment.extensions/nginx0-deployment 
I1206 16:15:00.139] REVISION  CHANGE-CAUSE
I1206 16:15:00.139] 1         <none>
I1206 16:15:00.139] 
I1206 16:15:00.139] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 16:15:00.139] has:nginx1-deployment
I1206 16:15:00.139] Successful
I1206 16:15:00.140] message:deployment.extensions/nginx1-deployment 
I1206 16:15:00.140] REVISION  CHANGE-CAUSE
I1206 16:15:00.140] 1         <none>
I1206 16:15:00.140] 
I1206 16:15:00.140] deployment.extensions/nginx0-deployment 
I1206 16:15:00.140] REVISION  CHANGE-CAUSE
I1206 16:15:00.140] 1         <none>
I1206 16:15:00.140] 
I1206 16:15:00.140] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 16:15:00.140] has:Object 'Kind' is missing
I1206 16:15:00.140] deployment.extensions "nginx1-deployment" force deleted
I1206 16:15:00.140] deployment.extensions "nginx0-deployment" force deleted
I1206 16:15:01.131] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:15:01.278] replicationcontroller/busybox0 created
I1206 16:15:01.282] replicationcontroller/busybox1 created
... skipping 7 lines ...
I1206 16:15:01.471] message:no rollbacker has been implemented for "ReplicationController"
I1206 16:15:01.471] no rollbacker has been implemented for "ReplicationController"
I1206 16:15:01.471] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:15:01.472] has:Object 'Kind' is missing
I1206 16:15:01.561] Successful
I1206 16:15:01.561] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:15:01.562] error: replicationcontrollers "busybox0" pausing is not supported
I1206 16:15:01.562] error: replicationcontrollers "busybox1" pausing is not supported
I1206 16:15:01.562] has:Object 'Kind' is missing
I1206 16:15:01.563] Successful
I1206 16:15:01.564] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:15:01.564] error: replicationcontrollers "busybox0" pausing is not supported
I1206 16:15:01.564] error: replicationcontrollers "busybox1" pausing is not supported
I1206 16:15:01.564] has:replicationcontrollers "busybox0" pausing is not supported
I1206 16:15:01.566] Successful
I1206 16:15:01.566] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:15:01.566] error: replicationcontrollers "busybox0" pausing is not supported
I1206 16:15:01.566] error: replicationcontrollers "busybox1" pausing is not supported
I1206 16:15:01.567] has:replicationcontrollers "busybox1" pausing is not supported
I1206 16:15:01.654] Successful
I1206 16:15:01.655] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:15:01.655] error: replicationcontrollers "busybox0" resuming is not supported
I1206 16:15:01.655] error: replicationcontrollers "busybox1" resuming is not supported
I1206 16:15:01.655] has:Object 'Kind' is missing
I1206 16:15:01.657] Successful
I1206 16:15:01.657] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:15:01.657] error: replicationcontrollers "busybox0" resuming is not supported
I1206 16:15:01.657] error: replicationcontrollers "busybox1" resuming is not supported
I1206 16:15:01.658] has:replicationcontrollers "busybox0" resuming is not supported
I1206 16:15:01.659] Successful
I1206 16:15:01.660] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:15:01.660] error: replicationcontrollers "busybox0" resuming is not supported
I1206 16:15:01.660] error: replicationcontrollers "busybox1" resuming is not supported
I1206 16:15:01.660] has:replicationcontrollers "busybox0" resuming is not supported
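The "pausing/resuming is not supported" messages show the other half of that contract: rollout pause and resume are only implemented for rollable kinds such as Deployment, so each ReplicationController in the tree is rejected individually. A hypothetical invocation consistent with the output above:

    kubectl rollout pause -f hack/testdata/recursive/rc --recursive
    # error: replicationcontrollers "busybox0" pausing is not supported
    # error: replicationcontrollers "busybox1" pausing is not supported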
I1206 16:15:01.733] replicationcontroller "busybox0" force deleted
I1206 16:15:01.738] replicationcontroller "busybox1" force deleted
W1206 16:15:01.839] I1206 16:15:01.281886   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112892-10105", Name:"busybox0", UID:"153dac22-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1127", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-8nmp7
W1206 16:15:01.839] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1206 16:15:01.840] I1206 16:15:01.284938   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112892-10105", Name:"busybox1", UID:"153e79b6-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1129", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-wrr9j
W1206 16:15:01.840] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1206 16:15:01.840] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 16:15:02.760] +++ exit code: 0
I1206 16:15:02.818] Recording: run_namespace_tests
I1206 16:15:02.818] Running command: run_namespace_tests
I1206 16:15:02.841] 
I1206 16:15:02.843] +++ Running case: test-cmd.run_namespace_tests 
I1206 16:15:02.845] +++ working dir: /go/src/k8s.io/kubernetes
I1206 16:15:02.848] +++ command: run_namespace_tests
I1206 16:15:02.858] +++ [1206 16:15:02] Testing kubectl(v1:namespaces)
I1206 16:15:02.928] namespace/my-namespace created
I1206 16:15:03.023] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1206 16:15:03.097] namespace "my-namespace" deleted
I1206 16:15:08.220] namespace/my-namespace condition met
I1206 16:15:08.306] Successful
I1206 16:15:08.306] message:Error from server (NotFound): namespaces "my-namespace" not found
I1206 16:15:08.307] has: not found
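The "condition met" line followed by a NotFound check suggests the namespace test blocks on deletion before asserting the object is gone, e.g. via kubectl wait (the exact flags here are an assumption):

    kubectl delete namespace my-namespace
    kubectl wait --for=delete namespace/my-namespace --timeout=60s
    # namespace/my-namespace condition met
    kubectl get namespaces my-namespace
    # Error from server (NotFound): namespaces "my-namespace" not found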
I1206 16:15:08.416] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I1206 16:15:08.487] namespace/other created
I1206 16:15:08.575] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I1206 16:15:08.661] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:15:08.805] pod/valid-pod created
I1206 16:15:08.900] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 16:15:08.985] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 16:15:09.063] Successful
I1206 16:15:09.063] message:error: a resource cannot be retrieved by name across all namespaces
I1206 16:15:09.063] has:a resource cannot be retrieved by name across all namespaces
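Retrieving a resource by name together with --all-namespaces is rejected because a name is only unique within one namespace. A likely repro of the check above (the pod name is taken from the surrounding test):

    kubectl get pods valid-pod --all-namespaces
    # error: a resource cannot be retrieved by name across all namespaces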
I1206 16:15:09.151] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 16:15:09.229] pod "valid-pod" force deleted
I1206 16:15:09.321] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:15:09.398] namespace "other" deleted
W1206 16:15:09.499] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1206 16:15:09.863] E1206 16:15:09.863188   55476 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1206 16:15:10.175] I1206 16:15:10.174854   55476 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1206 16:15:10.276] I1206 16:15:10.275315   55476 controller_utils.go:1034] Caches are synced for garbage collector controller
W1206 16:15:12.071] I1206 16:15:12.070607   55476 horizontal.go:309] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1544112892-10105
W1206 16:15:12.076] I1206 16:15:12.075554   55476 horizontal.go:309] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1544112892-10105
W1206 16:15:13.218] I1206 16:15:13.217103   55476 namespace_controller.go:171] Namespace has been deleted my-namespace
I1206 16:15:14.598] +++ exit code: 0
... skipping 113 lines ...
I1206 16:15:30.200] +++ command: run_client_config_tests
I1206 16:15:30.213] +++ [1206 16:15:30] Creating namespace namespace-1544112930-31669
I1206 16:15:30.283] namespace/namespace-1544112930-31669 created
I1206 16:15:30.351] Context "test" modified.
I1206 16:15:30.357] +++ [1206 16:15:30] Testing client config
I1206 16:15:30.423] Successful
I1206 16:15:30.424] message:error: stat missing: no such file or directory
I1206 16:15:30.424] has:missing: no such file or directory
I1206 16:15:30.490] Successful
I1206 16:15:30.490] message:error: stat missing: no such file or directory
I1206 16:15:30.490] has:missing: no such file or directory
I1206 16:15:30.556] Successful
I1206 16:15:30.557] message:error: stat missing: no such file or directory
I1206 16:15:30.557] has:missing: no such file or directory
I1206 16:15:30.629] Successful
I1206 16:15:30.629] message:Error in configuration: context was not found for specified context: missing-context
I1206 16:15:30.629] has:context was not found for specified context: missing-context
I1206 16:15:30.702] Successful
I1206 16:15:30.702] message:error: no server found for cluster "missing-cluster"
I1206 16:15:30.702] has:no server found for cluster "missing-cluster"
I1206 16:15:30.780] Successful
I1206 16:15:30.780] message:error: auth info "missing-user" does not exist
I1206 16:15:30.780] has:auth info "missing-user" does not exist
I1206 16:15:30.926] Successful
I1206 16:15:30.926] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1206 16:15:30.927] has:Error loading config file
I1206 16:15:31.001] Successful
I1206 16:15:31.001] message:error: stat missing-config: no such file or directory
I1206 16:15:31.002] has:no such file or directory
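Each client-config failure above maps onto one of kubectl's global connection flags; a plausible reconstruction of the probed commands (the flag values are assumptions matched to the error messages):

    kubectl get pods --kubeconfig=missing         # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context    # context was not found for specified context
    kubectl get pods --cluster=missing-cluster    # no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user          # auth info "missing-user" does not exist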
I1206 16:15:31.016] +++ exit code: 0
I1206 16:15:31.055] Recording: run_service_accounts_tests
I1206 16:15:31.056] Running command: run_service_accounts_tests
I1206 16:15:31.079] 
I1206 16:15:31.082] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 76 lines ...
I1206 16:15:38.370]                 job-name=test-job
I1206 16:15:38.370]                 run=pi
I1206 16:15:38.370] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1206 16:15:38.370] Parallelism:    1
I1206 16:15:38.370] Completions:    1
I1206 16:15:38.370] Start Time:     Thu, 06 Dec 2018 16:15:38 +0000
I1206 16:15:38.370] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1206 16:15:38.371] Pod Template:
I1206 16:15:38.371]   Labels:  controller-uid=2b326a9f-f972-11e8-9434-0242ac110002
I1206 16:15:38.371]            job-name=test-job
I1206 16:15:38.371]            run=pi
I1206 16:15:38.371]   Containers:
I1206 16:15:38.371]    pi:
... skipping 331 lines ...
I1206 16:15:47.887]   selector:
I1206 16:15:47.887]     role: padawan
I1206 16:15:47.887]   sessionAffinity: None
I1206 16:15:47.887]   type: ClusterIP
I1206 16:15:47.887] status:
I1206 16:15:47.887]   loadBalancer: {}
W1206 16:15:47.988] error: you must specify resources by --filename when --local is set.
W1206 16:15:47.988] Example resource specifications include:
W1206 16:15:47.988]    '-f rsrc.yaml'
W1206 16:15:47.988]    '--filename=rsrc.json'
I1206 16:15:48.089] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1206 16:15:48.208] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1206 16:15:48.290] service "redis-master" deleted
... skipping 93 lines ...
I1206 16:15:53.997] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 16:15:54.086] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1206 16:15:54.194] daemonset.extensions/bind rolled back
I1206 16:15:54.289] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1206 16:15:54.382] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1206 16:15:54.487] Successful
I1206 16:15:54.488] message:error: unable to find specified revision 1000000 in history
I1206 16:15:54.488] has:unable to find specified revision
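Rolling a DaemonSet back to a revision that was never recorded fails fast; the check above is consistent with an undo to an absurd revision number (command reconstructed, not taken verbatim from the harness):

    kubectl rollout undo daemonset/bind --to-revision=1000000
    # error: unable to find specified revision 1000000 in history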
I1206 16:15:54.575] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1206 16:15:54.666] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1206 16:15:54.772] daemonset.extensions/bind rolled back
I1206 16:15:54.866] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1206 16:15:54.957] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I1206 16:15:56.259] Namespace:    namespace-1544112955-25868
I1206 16:15:56.259] Selector:     app=guestbook,tier=frontend
I1206 16:15:56.259] Labels:       app=guestbook
I1206 16:15:56.259]               tier=frontend
I1206 16:15:56.260] Annotations:  <none>
I1206 16:15:56.260] Replicas:     3 current / 3 desired
I1206 16:15:56.260] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:15:56.260] Pod Template:
I1206 16:15:56.260]   Labels:  app=guestbook
I1206 16:15:56.260]            tier=frontend
I1206 16:15:56.260]   Containers:
I1206 16:15:56.260]    php-redis:
I1206 16:15:56.260]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1206 16:15:56.373] Namespace:    namespace-1544112955-25868
I1206 16:15:56.374] Selector:     app=guestbook,tier=frontend
I1206 16:15:56.374] Labels:       app=guestbook
I1206 16:15:56.374]               tier=frontend
I1206 16:15:56.374] Annotations:  <none>
I1206 16:15:56.374] Replicas:     3 current / 3 desired
I1206 16:15:56.374] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:15:56.374] Pod Template:
I1206 16:15:56.374]   Labels:  app=guestbook
I1206 16:15:56.374]            tier=frontend
I1206 16:15:56.374]   Containers:
I1206 16:15:56.375]    php-redis:
I1206 16:15:56.375]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1206 16:15:56.480] Namespace:    namespace-1544112955-25868
I1206 16:15:56.480] Selector:     app=guestbook,tier=frontend
I1206 16:15:56.480] Labels:       app=guestbook
I1206 16:15:56.480]               tier=frontend
I1206 16:15:56.480] Annotations:  <none>
I1206 16:15:56.480] Replicas:     3 current / 3 desired
I1206 16:15:56.481] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:15:56.481] Pod Template:
I1206 16:15:56.481]   Labels:  app=guestbook
I1206 16:15:56.481]            tier=frontend
I1206 16:15:56.481]   Containers:
I1206 16:15:56.481]    php-redis:
I1206 16:15:56.481]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 4 lines ...
I1206 16:15:56.482]       memory:  100Mi
I1206 16:15:56.482]     Environment:
I1206 16:15:56.482]       GET_HOSTS_FROM:  dns
I1206 16:15:56.482]     Mounts:            <none>
I1206 16:15:56.482]   Volumes:             <none>
I1206 16:15:56.482] 
W1206 16:15:56.585] E1206 16:15:54.203612   55476 daemon_controller.go:303] namespace-1544112952-13204/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1544112952-13204", SelfLink:"/apis/apps/v1/namespaces/namespace-1544112952-13204/daemonsets/bind", UID:"33f76199-f972-11e8-9434-0242ac110002", ResourceVersion:"1345", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63679709752, loc:(*time.Location)(0x66fa920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1544112952-13204\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0045fc260), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00454f778), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004508900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0045fc2a0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0026ca608)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00454f7f0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1206 16:15:56.588] E1206 16:15:54.782337   55476 daemon_controller.go:303] namespace-1544112952-13204/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1544112952-13204", SelfLink:"/apis/apps/v1/namespaces/namespace-1544112952-13204/daemonsets/bind", UID:"33f76199-f972-11e8-9434-0242ac110002", ResourceVersion:"1348", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63679709752, loc:(*time.Location)(0x66fa920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1544112952-13204\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00339fa80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00360ba38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003de4840), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc00339fae0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000415620)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00360bab0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1206 16:15:56.589] I1206 16:15:55.605521   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"359e6dcb-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-t7f9j
W1206 16:15:56.589] I1206 16:15:55.608508   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"359e6dcb-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fjd25
W1206 16:15:56.589] I1206 16:15:55.609015   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"359e6dcb-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5djq6
W1206 16:15:56.590] I1206 16:15:56.023657   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"35dea5ae-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7lmmd
W1206 16:15:56.590] I1206 16:15:56.026236   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"35dea5ae-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dlhpk
W1206 16:15:56.590] I1206 16:15:56.026831   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"35dea5ae-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rjgsv
... skipping 2 lines ...
I1206 16:15:56.691] Namespace:    namespace-1544112955-25868
I1206 16:15:56.691] Selector:     app=guestbook,tier=frontend
I1206 16:15:56.691] Labels:       app=guestbook
I1206 16:15:56.691]               tier=frontend
I1206 16:15:56.691] Annotations:  <none>
I1206 16:15:56.691] Replicas:     3 current / 3 desired
I1206 16:15:56.691] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:15:56.691] Pod Template:
I1206 16:15:56.691]   Labels:  app=guestbook
I1206 16:15:56.691]            tier=frontend
I1206 16:15:56.691]   Containers:
I1206 16:15:56.692]    php-redis:
I1206 16:15:56.692]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1206 16:15:56.730] Namespace:    namespace-1544112955-25868
I1206 16:15:56.730] Selector:     app=guestbook,tier=frontend
I1206 16:15:56.730] Labels:       app=guestbook
I1206 16:15:56.730]               tier=frontend
I1206 16:15:56.730] Annotations:  <none>
I1206 16:15:56.730] Replicas:     3 current / 3 desired
I1206 16:15:56.731] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:15:56.731] Pod Template:
I1206 16:15:56.731]   Labels:  app=guestbook
I1206 16:15:56.731]            tier=frontend
I1206 16:15:56.731]   Containers:
I1206 16:15:56.731]    php-redis:
I1206 16:15:56.731]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1206 16:15:56.836] Namespace:    namespace-1544112955-25868
I1206 16:15:56.836] Selector:     app=guestbook,tier=frontend
I1206 16:15:56.836] Labels:       app=guestbook
I1206 16:15:56.837]               tier=frontend
I1206 16:15:56.837] Annotations:  <none>
I1206 16:15:56.837] Replicas:     3 current / 3 desired
I1206 16:15:56.837] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:15:56.837] Pod Template:
I1206 16:15:56.837]   Labels:  app=guestbook
I1206 16:15:56.837]            tier=frontend
I1206 16:15:56.837]   Containers:
I1206 16:15:56.837]    php-redis:
I1206 16:15:56.837]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1206 16:15:56.937] Namespace:    namespace-1544112955-25868
I1206 16:15:56.937] Selector:     app=guestbook,tier=frontend
I1206 16:15:56.937] Labels:       app=guestbook
I1206 16:15:56.937]               tier=frontend
I1206 16:15:56.937] Annotations:  <none>
I1206 16:15:56.937] Replicas:     3 current / 3 desired
I1206 16:15:56.938] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:15:56.938] Pod Template:
I1206 16:15:56.938]   Labels:  app=guestbook
I1206 16:15:56.938]            tier=frontend
I1206 16:15:56.938]   Containers:
I1206 16:15:56.938]    php-redis:
I1206 16:15:56.938]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1206 16:15:57.041] Namespace:    namespace-1544112955-25868
I1206 16:15:57.042] Selector:     app=guestbook,tier=frontend
I1206 16:15:57.042] Labels:       app=guestbook
I1206 16:15:57.042]               tier=frontend
I1206 16:15:57.042] Annotations:  <none>
I1206 16:15:57.042] Replicas:     3 current / 3 desired
I1206 16:15:57.042] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:15:57.042] Pod Template:
I1206 16:15:57.042]   Labels:  app=guestbook
I1206 16:15:57.042]            tier=frontend
I1206 16:15:57.042]   Containers:
I1206 16:15:57.042]    php-redis:
I1206 16:15:57.042]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I1206 16:15:57.837] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I1206 16:15:57.924] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I1206 16:15:58.011] replicationcontroller/frontend scaled
I1206 16:15:58.104] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I1206 16:15:58.181] replicationcontroller "frontend" deleted
W1206 16:15:58.282] I1206 16:15:57.222883   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"35dea5ae-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-7lmmd
W1206 16:15:58.282] error: Expected replicas to be 3, was 2
W1206 16:15:58.282] I1206 16:15:57.749759   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"35dea5ae-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nw6t9
W1206 16:15:58.283] I1206 16:15:58.017702   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"35dea5ae-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-nw6t9
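The "Expected replicas to be 3, was 2" warning just above is the behavior of scale's optimistic precondition: when --current-replicas does not match the live spec, the resize is refused. A sketch of the failing call (values taken from the surrounding replica checks; the exact harness command is an assumption):

    kubectl scale rc frontend --current-replicas=3 --replicas=2
    # error: Expected replicas to be 3, was 2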
W1206 16:15:58.333] I1206 16:15:58.332403   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"redis-master", UID:"373ef1ba-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-h2c9k
I1206 16:15:58.433] replicationcontroller/redis-master created
I1206 16:15:58.491] replicationcontroller/redis-slave created
W1206 16:15:58.592] I1206 16:15:58.494583   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"redis-slave", UID:"3757b80b-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1411", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-flr4h
... skipping 36 lines ...
I1206 16:16:00.047] service "expose-test-deployment" deleted
I1206 16:16:00.144] Successful
I1206 16:16:00.144] message:service/expose-test-deployment exposed
I1206 16:16:00.144] has:service/expose-test-deployment exposed
I1206 16:16:00.224] service "expose-test-deployment" deleted
I1206 16:16:00.314] Successful
I1206 16:16:00.314] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1206 16:16:00.314] See 'kubectl expose -h' for help and examples
I1206 16:16:00.314] has:invalid deployment: no selectors
I1206 16:16:00.396] Successful
I1206 16:16:00.396] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1206 16:16:00.397] See 'kubectl expose -h' for help and examples
I1206 16:16:00.397] has:invalid deployment: no selectors
I1206 16:16:00.538] deployment.extensions/nginx-deployment created
I1206 16:16:00.633] core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I1206 16:16:00.718] service/nginx-deployment exposed
I1206 16:16:00.810] core.sh:1137: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
... skipping 23 lines ...
I1206 16:16:02.364] service "frontend" deleted
I1206 16:16:02.371] service "frontend-2" deleted
I1206 16:16:02.378] service "frontend-3" deleted
I1206 16:16:02.385] service "frontend-4" deleted
I1206 16:16:02.393] service "frontend-5" deleted
I1206 16:16:02.487] Successful
I1206 16:16:02.487] message:error: cannot expose a Node
I1206 16:16:02.487] has:cannot expose
I1206 16:16:02.576] Successful
I1206 16:16:02.577] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1206 16:16:02.577] has:metadata.name: Invalid value
I1206 16:16:02.667] Successful
I1206 16:16:02.667] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I1206 16:16:04.803] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1206 16:16:04.895] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1206 16:16:04.973] horizontalpodautoscaler.autoscaling "frontend" deleted
W1206 16:16:05.074] I1206 16:16:04.363398   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"3ad722f0-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-c9nsm
W1206 16:16:05.075] I1206 16:16:04.365699   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"3ad722f0-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6ts7f
W1206 16:16:05.075] I1206 16:16:04.365910   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544112955-25868", Name:"frontend", UID:"3ad722f0-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hczpk
W1206 16:16:05.075] Error: required flag(s) "max" not set
W1206 16:16:05.075] 
W1206 16:16:05.075] 
W1206 16:16:05.075] Examples:
W1206 16:16:05.075]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1206 16:16:05.075]   kubectl autoscale deployment foo --min=2 --max=10
W1206 16:16:05.076]   
... skipping 54 lines ...
I1206 16:16:05.291]           limits:
I1206 16:16:05.291]             cpu: 300m
I1206 16:16:05.291]           requests:
I1206 16:16:05.292]             cpu: 300m
I1206 16:16:05.292]       terminationGracePeriodSeconds: 0
I1206 16:16:05.292] status: {}
W1206 16:16:05.392] Error from server (NotFound): deployments.extensions "nginx-deployment-resources" not found
I1206 16:16:05.520] deployment.extensions/nginx-deployment-resources created
I1206 16:16:05.617] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1206 16:16:05.707] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 16:16:05.798] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1206 16:16:05.884] deployment.extensions/nginx-deployment-resources resource requirements updated
I1206 16:16:05.983] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 81 lines ...
W1206 16:16:06.987] I1206 16:16:05.529664   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-69c96fd869", UID:"3b88dd50-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-x2x5f
W1206 16:16:06.987] I1206 16:16:05.887633   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources", UID:"3b884198-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W1206 16:16:06.987] I1206 16:16:05.890862   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-6c5996c457", UID:"3bc05772-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-swfl6
W1206 16:16:06.987] I1206 16:16:05.893402   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources", UID:"3b884198-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W1206 16:16:06.988] I1206 16:16:05.898773   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-69c96fd869", UID:"3b88dd50-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-8s2ff
W1206 16:16:06.988] I1206 16:16:05.898830   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources", UID:"3b884198-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 2
W1206 16:16:06.988] E1206 16:16:05.901362   55476 replica_set.go:450] Sync "namespace-1544112955-25868/nginx-deployment-resources-6c5996c457" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-resources-6c5996c457": the object has been modified; please apply your changes to the latest version and try again
W1206 16:16:06.988] I1206 16:16:05.905204   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-6c5996c457", UID:"3bc05772-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1675", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-b76gz
W1206 16:16:06.988] error: unable to find container named redis
W1206 16:16:06.989] I1206 16:16:06.255503   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources", UID:"3b884198-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1690", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 0
W1206 16:16:06.989] I1206 16:16:06.260552   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-69c96fd869", UID:"3b88dd50-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-x2x5f
W1206 16:16:06.989] I1206 16:16:06.260589   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-69c96fd869", UID:"3b88dd50-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-dpkzr
W1206 16:16:06.989] I1206 16:16:06.262741   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources", UID:"3b884198-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 2
W1206 16:16:06.990] I1206 16:16:06.265746   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-5f4579485f", UID:"3bf7898b-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-v9l68
W1206 16:16:06.990] I1206 16:16:06.267984   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-5f4579485f", UID:"3bf7898b-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-k4nl8
W1206 16:16:06.990] I1206 16:16:06.530233   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources", UID:"3b884198-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1714", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-6c5996c457 to 0
W1206 16:16:06.991] I1206 16:16:06.536083   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources", UID:"3b884198-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1716", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 2
W1206 16:16:06.991] I1206 16:16:06.730997   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-6c5996c457", UID:"3bc05772-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1717", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6c5996c457-b76gz
W1206 16:16:06.991] I1206 16:16:06.780385   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-6c5996c457", UID:"3bc05772-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1717", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6c5996c457-swfl6
W1206 16:16:06.991] error: you must specify resources by --filename when --local is set.
W1206 16:16:06.991] Example resource specifications include:
W1206 16:16:06.991]    '-f rsrc.yaml'
W1206 16:16:06.991]    '--filename=rsrc.json'
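The --local error above comes from the kubectl set family: with --local the command never contacts the API server, so the object to mutate must be supplied with -f/--filename. A hedged sketch (deployment.yaml is a hypothetical manifest; the cpu value mirrors this test):

  kubectl set resources -f deployment.yaml --local --limits=cpu=200m -o yaml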
I1206 16:16:07.092] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1206 16:16:07.128] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1206 16:16:07.218] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 13 lines ...
I1206 16:16:07.705] apps.sh:184: Successful get deploy test-nginx-extensions {{(index .spec.template.spec.containers 0).name}}: nginx
I1206 16:16:07.782] Successful
I1206 16:16:07.783] message:10
I1206 16:16:07.783] has not:2
W1206 16:16:07.883] I1206 16:16:07.127982   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-ff8d89cb6", UID:"3c21531b-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1719", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-m4gd4
W1206 16:16:07.884] I1206 16:16:07.228392   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112955-25868", Name:"nginx-deployment-resources-ff8d89cb6", UID:"3c21531b-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1719", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-hnf4d
W1206 16:16:07.884] E1206 16:16:07.376577   55476 replica_set.go:450] Sync "namespace-1544112955-25868/nginx-deployment-resources-ff8d89cb6" failed with replicasets.apps "nginx-deployment-resources-ff8d89cb6" not found
W1206 16:16:07.885] I1206 16:16:07.615840   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"test-nginx-extensions", UID:"3cc78cc9-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1756", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5b89c6c69f to 1
W1206 16:16:07.885] I1206 16:16:07.620926   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"test-nginx-extensions-5b89c6c69f", UID:"3cc8198a-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1757", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5b89c6c69f-tpjmx
I1206 16:16:07.985] Successful
I1206 16:16:07.986] message:extensions/v1beta1
I1206 16:16:07.986] has:extensions/v1beta1
I1206 16:16:08.000] Successful
... skipping 21 lines ...
I1206 16:16:08.854]                 pod-template-hash=55c9b846cc
I1206 16:16:08.855] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1206 16:16:08.855]                 deployment.kubernetes.io/max-replicas: 2
I1206 16:16:08.855]                 deployment.kubernetes.io/revision: 1
I1206 16:16:08.855] Controlled By:  Deployment/test-nginx-apps
I1206 16:16:08.855] Replicas:       1 current / 1 desired
I1206 16:16:08.856] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:08.856] Pod Template:
I1206 16:16:08.856]   Labels:  app=test-nginx-apps
I1206 16:16:08.856]            pod-template-hash=55c9b846cc
I1206 16:16:08.856]   Containers:
I1206 16:16:08.856]    nginx:
I1206 16:16:08.857]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
W1206 16:16:13.782] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W1206 16:16:13.782] I1206 16:16:13.293743   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx", UID:"3fda87c2-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1892", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 1
W1206 16:16:13.782] I1206 16:16:13.297029   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-9486b7cb7", UID:"402a7e93-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1893", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-cjxmh
W1206 16:16:13.783] I1206 16:16:13.300590   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx", UID:"3fda87c2-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1892", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-6f6bb85d9c to 2
W1206 16:16:13.783] I1206 16:16:13.305486   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-6f6bb85d9c", UID:"3fdb1ad9-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1898", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-6f6bb85d9c-qdr8q
W1206 16:16:13.783] I1206 16:16:13.306156   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx", UID:"3fda87c2-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1896", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 2
W1206 16:16:13.784] E1206 16:16:13.306810   55476 replica_set.go:450] Sync "namespace-1544112967-8233/nginx-9486b7cb7" failed with Operation cannot be fulfilled on replicasets.apps "nginx-9486b7cb7": the object has been modified; please apply your changes to the latest version and try again
W1206 16:16:13.784] I1206 16:16:13.309347   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-9486b7cb7", UID:"402a7e93-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1903", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-r479s
I1206 16:16:14.774] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 16:16:14.962] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 16:16:15.068] deployment.extensions/nginx rolled back
W1206 16:16:15.169] error: unable to find specified revision 1000000 in history
I1206 16:16:16.165] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1206 16:16:16.259] deployment.extensions/nginx paused
W1206 16:16:16.367] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I1206 16:16:16.468] deployment.extensions/nginx resumed
I1206 16:16:16.570] deployment.extensions/nginx rolled back
I1206 16:16:16.751]     deployment.kubernetes.io/revision-history: 1,3
W1206 16:16:16.934] error: desired revision (3) is different from the running revision (5)
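The three rollback errors above (revision 1000000, the paused deployment, and the desired-vs-running revision mismatch) all come from the kubectl rollout subcommands validating the requested revision against the deployment's recorded history and refusing to act while the deployment is paused. The happy path, sketched with names from this run:

  kubectl rollout resume deployment/nginx               # a paused deployment cannot be rolled back
  kubectl rollout undo deployment/nginx --to-revision=3 # must name a revision that exists in history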
I1206 16:16:17.078] deployment.extensions/nginx2 created
I1206 16:16:17.160] deployment.extensions "nginx2" deleted
I1206 16:16:17.244] deployment.extensions "nginx" deleted
I1206 16:16:17.339] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:16:17.488] deployment.extensions/nginx-deployment created
I1206 16:16:17.583] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 27 lines ...
W1206 16:16:19.841] I1206 16:16:17.496229   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-646d4f779d", UID:"42aafdfc-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1968", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-hlbzd
W1206 16:16:19.842] I1206 16:16:17.496294   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-646d4f779d", UID:"42aafdfc-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1968", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-vvsgp
W1206 16:16:19.842] I1206 16:16:17.843909   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment", UID:"42aa7222-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1982", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W1206 16:16:19.842] I1206 16:16:17.847002   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-85db47bbdb", UID:"42e0d2a0-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-fz76n
W1206 16:16:19.842] I1206 16:16:17.850129   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment", UID:"42aa7222-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1982", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1206 16:16:19.843] I1206 16:16:17.854174   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-646d4f779d", UID:"42aafdfc-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1988", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-gk2rc
W1206 16:16:19.843] E1206 16:16:17.856770   55476 replica_set.go:450] Sync "namespace-1544112967-8233/nginx-deployment-85db47bbdb" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-85db47bbdb": the object has been modified; please apply your changes to the latest version and try again
W1206 16:16:19.843] I1206 16:16:17.856940   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment", UID:"42aa7222-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1985", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 2
W1206 16:16:19.843] I1206 16:16:17.866810   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-85db47bbdb", UID:"42e0d2a0-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1999", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-m75jg
W1206 16:16:19.844] error: unable to find container named "redis"
W1206 16:16:19.844] I1206 16:16:18.981021   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment", UID:"42aa7222-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2016", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1206 16:16:19.844] I1206 16:16:18.985390   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-646d4f779d", UID:"42aafdfc-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-vvsgp
W1206 16:16:19.844] I1206 16:16:18.985431   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-646d4f779d", UID:"42aafdfc-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-hlbzd
W1206 16:16:19.845] I1206 16:16:18.987135   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment", UID:"42aa7222-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2019", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 2
W1206 16:16:19.845] I1206 16:16:18.990629   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-dc756cc6", UID:"438d2c4b-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-8cwp9
W1206 16:16:19.845] I1206 16:16:18.992628   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-dc756cc6", UID:"438d2c4b-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-95m5m
... skipping 15 lines ...
I1206 16:16:20.870] deployment.extensions/nginx-deployment env updated
I1206 16:16:20.962] deployment.extensions/nginx-deployment env updated
W1206 16:16:21.063] I1206 16:16:20.415739   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment", UID:"4401da49-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2070", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 1
W1206 16:16:21.063] I1206 16:16:20.419262   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-5b795689cd", UID:"44692a95-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-tb7gd
W1206 16:16:21.063] I1206 16:16:20.422232   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment", UID:"4401da49-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2070", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1206 16:16:21.064] I1206 16:16:20.426120   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-646d4f779d", UID:"440274b8-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-qbkjp
W1206 16:16:21.064] E1206 16:16:20.427636   55476 replica_set.go:450] Sync "namespace-1544112967-8233/nginx-deployment-5b795689cd" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5b795689cd": the object has been modified; please apply your changes to the latest version and try again
W1206 16:16:21.065] I1206 16:16:20.427936   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment", UID:"4401da49-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 2
W1206 16:16:21.065] I1206 16:16:20.430863   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-5b795689cd", UID:"44692a95-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2081", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-nws6w
W1206 16:16:21.065] I1206 16:16:20.697230   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment", UID:"4401da49-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2094", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1206 16:16:21.066] I1206 16:16:20.701163   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-646d4f779d", UID:"440274b8-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2098", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-cmbz9
W1206 16:16:21.066] I1206 16:16:20.701364   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-646d4f779d", UID:"440274b8-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2098", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-k8ml5
W1206 16:16:21.066] I1206 16:16:20.703913   55476 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment", UID:"4401da49-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2096", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5766b7c95b to 2
... skipping 48 lines ...
I1206 16:16:23.435] Namespace:    namespace-1544112981-7912
I1206 16:16:23.435] Selector:     app=guestbook,tier=frontend
I1206 16:16:23.435] Labels:       app=guestbook
I1206 16:16:23.435]               tier=frontend
I1206 16:16:23.435] Annotations:  <none>
I1206 16:16:23.435] Replicas:     3 current / 3 desired
I1206 16:16:23.435] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:23.435] Pod Template:
I1206 16:16:23.436]   Labels:  app=guestbook
I1206 16:16:23.436]            tier=frontend
I1206 16:16:23.436]   Containers:
I1206 16:16:23.436]    php-redis:
I1206 16:16:23.436]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1206 16:16:23.545] Namespace:    namespace-1544112981-7912
I1206 16:16:23.545] Selector:     app=guestbook,tier=frontend
I1206 16:16:23.545] Labels:       app=guestbook
I1206 16:16:23.545]               tier=frontend
I1206 16:16:23.546] Annotations:  <none>
I1206 16:16:23.546] Replicas:     3 current / 3 desired
I1206 16:16:23.546] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:23.546] Pod Template:
I1206 16:16:23.546]   Labels:  app=guestbook
I1206 16:16:23.546]            tier=frontend
I1206 16:16:23.546]   Containers:
I1206 16:16:23.546]    php-redis:
I1206 16:16:23.546]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1206 16:16:23.653] Namespace:    namespace-1544112981-7912
I1206 16:16:23.653] Selector:     app=guestbook,tier=frontend
I1206 16:16:23.653] Labels:       app=guestbook
I1206 16:16:23.653]               tier=frontend
I1206 16:16:23.653] Annotations:  <none>
I1206 16:16:23.653] Replicas:     3 current / 3 desired
I1206 16:16:23.654] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:23.654] Pod Template:
I1206 16:16:23.654]   Labels:  app=guestbook
I1206 16:16:23.654]            tier=frontend
I1206 16:16:23.654]   Containers:
I1206 16:16:23.654]    php-redis:
I1206 16:16:23.654]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 6 lines ...
I1206 16:16:23.655]       GET_HOSTS_FROM:  dns
I1206 16:16:23.655]     Mounts:            <none>
I1206 16:16:23.655]   Volumes:             <none>
I1206 16:16:23.655] 
W1206 16:16:23.756] I1206 16:16:21.188096   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-5b795689cd", UID:"44692a95-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2141", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-tb7gd
W1206 16:16:23.756] I1206 16:16:21.238246   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112967-8233", Name:"nginx-deployment-5b795689cd", UID:"44692a95-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2141", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-nws6w
W1206 16:16:23.756] E1206 16:16:21.384222   55476 replica_set.go:450] Sync "namespace-1544112967-8233/nginx-deployment-5766b7c95b" failed with replicasets.apps "nginx-deployment-5766b7c95b" not found
W1206 16:16:23.757] E1206 16:16:21.484300   55476 replica_set.go:450] Sync "namespace-1544112967-8233/nginx-deployment-65b869c68c" failed with replicasets.apps "nginx-deployment-65b869c68c" not found
W1206 16:16:23.757] E1206 16:16:21.684354   55476 replica_set.go:450] Sync "namespace-1544112967-8233/nginx-deployment-669d4f8fc9" failed with replicasets.apps "nginx-deployment-669d4f8fc9" not found
W1206 16:16:23.757] E1206 16:16:21.733905   55476 replica_set.go:450] Sync "namespace-1544112967-8233/nginx-deployment-5b795689cd" failed with replicasets.apps "nginx-deployment-5b795689cd" not found
W1206 16:16:23.757] E1206 16:16:21.834401   55476 replica_set.go:450] Sync "namespace-1544112967-8233/nginx-deployment-7b8f7659b7" failed with replicasets.apps "nginx-deployment-7b8f7659b7" not found
W1206 16:16:23.757] I1206 16:16:22.023039   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend", UID:"455d7450-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jrmzz
W1206 16:16:23.758] I1206 16:16:22.035069   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend", UID:"455d7450-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pb9cg
W1206 16:16:23.758] I1206 16:16:22.085864   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend", UID:"455d7450-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n4xsl
W1206 16:16:23.758] E1206 16:16:22.284587   55476 replica_set.go:450] Sync "namespace-1544112981-7912/frontend" failed with replicasets.apps "frontend" not found
W1206 16:16:23.758] I1206 16:16:22.417403   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend-no-cascade", UID:"459a1293-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2195", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-vmgsx
W1206 16:16:23.759] I1206 16:16:22.419807   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend-no-cascade", UID:"459a1293-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2195", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-cwknr
W1206 16:16:23.759] I1206 16:16:22.485537   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend-no-cascade", UID:"459a1293-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2195", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-wlt5r
W1206 16:16:23.759] E1206 16:16:22.734142   55476 replica_set.go:450] Sync "namespace-1544112981-7912/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W1206 16:16:23.759] I1206 16:16:23.212028   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend", UID:"4611b80b-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pzhr8
W1206 16:16:23.759] I1206 16:16:23.215755   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend", UID:"4611b80b-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zxwcf
W1206 16:16:23.760] I1206 16:16:23.215802   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend", UID:"4611b80b-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-k99wj
I1206 16:16:23.860] apps.sh:543: Successful describe
I1206 16:16:23.860] Name:         frontend
I1206 16:16:23.860] Namespace:    namespace-1544112981-7912
I1206 16:16:23.861] Selector:     app=guestbook,tier=frontend
I1206 16:16:23.861] Labels:       app=guestbook
I1206 16:16:23.861]               tier=frontend
I1206 16:16:23.861] Annotations:  <none>
I1206 16:16:23.861] Replicas:     3 current / 3 desired
I1206 16:16:23.861] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:23.861] Pod Template:
I1206 16:16:23.861]   Labels:  app=guestbook
I1206 16:16:23.861]            tier=frontend
I1206 16:16:23.861]   Containers:
I1206 16:16:23.861]    php-redis:
I1206 16:16:23.861]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1206 16:16:23.901] Namespace:    namespace-1544112981-7912
I1206 16:16:23.901] Selector:     app=guestbook,tier=frontend
I1206 16:16:23.901] Labels:       app=guestbook
I1206 16:16:23.902]               tier=frontend
I1206 16:16:23.902] Annotations:  <none>
I1206 16:16:23.902] Replicas:     3 current / 3 desired
I1206 16:16:23.902] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:23.902] Pod Template:
I1206 16:16:23.902]   Labels:  app=guestbook
I1206 16:16:23.902]            tier=frontend
I1206 16:16:23.902]   Containers:
I1206 16:16:23.902]    php-redis:
I1206 16:16:23.902]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1206 16:16:24.009] Namespace:    namespace-1544112981-7912
I1206 16:16:24.009] Selector:     app=guestbook,tier=frontend
I1206 16:16:24.009] Labels:       app=guestbook
I1206 16:16:24.009]               tier=frontend
I1206 16:16:24.009] Annotations:  <none>
I1206 16:16:24.009] Replicas:     3 current / 3 desired
I1206 16:16:24.009] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:24.009] Pod Template:
I1206 16:16:24.009]   Labels:  app=guestbook
I1206 16:16:24.009]            tier=frontend
I1206 16:16:24.009]   Containers:
I1206 16:16:24.009]    php-redis:
I1206 16:16:24.010]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1206 16:16:24.112] Namespace:    namespace-1544112981-7912
I1206 16:16:24.112] Selector:     app=guestbook,tier=frontend
I1206 16:16:24.112] Labels:       app=guestbook
I1206 16:16:24.112]               tier=frontend
I1206 16:16:24.112] Annotations:  <none>
I1206 16:16:24.113] Replicas:     3 current / 3 desired
I1206 16:16:24.113] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:24.113] Pod Template:
I1206 16:16:24.113]   Labels:  app=guestbook
I1206 16:16:24.113]            tier=frontend
I1206 16:16:24.113]   Containers:
I1206 16:16:24.113]    php-redis:
I1206 16:16:24.113]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I1206 16:16:24.224] Namespace:    namespace-1544112981-7912
I1206 16:16:24.224] Selector:     app=guestbook,tier=frontend
I1206 16:16:24.224] Labels:       app=guestbook
I1206 16:16:24.224]               tier=frontend
I1206 16:16:24.224] Annotations:  <none>
I1206 16:16:24.224] Replicas:     3 current / 3 desired
I1206 16:16:24.224] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:24.224] Pod Template:
I1206 16:16:24.224]   Labels:  app=guestbook
I1206 16:16:24.224]            tier=frontend
I1206 16:16:24.224]   Containers:
I1206 16:16:24.224]    php-redis:
I1206 16:16:24.225]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I1206 16:16:29.312] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1206 16:16:29.405] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1206 16:16:29.481] horizontalpodautoscaler.autoscaling "frontend" deleted
W1206 16:16:29.582] I1206 16:16:28.876373   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend", UID:"497385db-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rxkqn
W1206 16:16:29.583] I1206 16:16:28.879296   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend", UID:"497385db-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f428b
W1206 16:16:29.583] I1206 16:16:28.879566   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544112981-7912", Name:"frontend", UID:"497385db-f972-11e8-9434-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-87l8t
W1206 16:16:29.583] Error: required flag(s) "max" not set
W1206 16:16:29.583] 
W1206 16:16:29.583] 
W1206 16:16:29.583] Examples:
W1206 16:16:29.583]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1206 16:16:29.584]   kubectl autoscale deployment foo --min=2 --max=10
W1206 16:16:29.584]   
... skipping 85 lines ...
I1206 16:16:32.439] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1206 16:16:32.526] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1206 16:16:32.630] statefulset.apps/nginx rolled back
I1206 16:16:32.722] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1206 16:16:32.809] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1206 16:16:32.910] Successful
I1206 16:16:32.910] message:error: unable to find specified revision 1000000 in history
I1206 16:16:32.910] has:unable to find specified revision
I1206 16:16:32.998] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1206 16:16:33.089] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1206 16:16:33.188] statefulset.apps/nginx rolled back
I1206 16:16:33.280] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I1206 16:16:33.370] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 61 lines ...
I1206 16:16:35.117] Name:         mock
I1206 16:16:35.117] Namespace:    namespace-1544112994-74
I1206 16:16:35.118] Selector:     app=mock
I1206 16:16:35.118] Labels:       app=mock
I1206 16:16:35.118] Annotations:  <none>
I1206 16:16:35.118] Replicas:     1 current / 1 desired
I1206 16:16:35.118] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:35.118] Pod Template:
I1206 16:16:35.118]   Labels:  app=mock
I1206 16:16:35.118]   Containers:
I1206 16:16:35.118]    mock-container:
I1206 16:16:35.118]     Image:        k8s.gcr.io/pause:2.0
I1206 16:16:35.118]     Port:         9949/TCP
... skipping 56 lines ...
I1206 16:16:37.234] Name:         mock
I1206 16:16:37.234] Namespace:    namespace-1544112994-74
I1206 16:16:37.234] Selector:     app=mock
I1206 16:16:37.234] Labels:       app=mock
I1206 16:16:37.234] Annotations:  <none>
I1206 16:16:37.234] Replicas:     1 current / 1 desired
I1206 16:16:37.235] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:37.235] Pod Template:
I1206 16:16:37.235]   Labels:  app=mock
I1206 16:16:37.235]   Containers:
I1206 16:16:37.235]    mock-container:
I1206 16:16:37.235]     Image:        k8s.gcr.io/pause:2.0
I1206 16:16:37.235]     Port:         9949/TCP
... skipping 56 lines ...
I1206 16:16:39.354] Name:         mock
I1206 16:16:39.354] Namespace:    namespace-1544112994-74
I1206 16:16:39.354] Selector:     app=mock
I1206 16:16:39.354] Labels:       app=mock
I1206 16:16:39.354] Annotations:  <none>
I1206 16:16:39.354] Replicas:     1 current / 1 desired
I1206 16:16:39.354] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:39.354] Pod Template:
I1206 16:16:39.354]   Labels:  app=mock
I1206 16:16:39.354]   Containers:
I1206 16:16:39.354]    mock-container:
I1206 16:16:39.354]     Image:        k8s.gcr.io/pause:2.0
I1206 16:16:39.354]     Port:         9949/TCP
... skipping 42 lines ...
I1206 16:16:41.427] Namespace:    namespace-1544112994-74
I1206 16:16:41.427] Selector:     app=mock
I1206 16:16:41.427] Labels:       app=mock
I1206 16:16:41.428]               status=replaced
I1206 16:16:41.428] Annotations:  <none>
I1206 16:16:41.428] Replicas:     1 current / 1 desired
I1206 16:16:41.428] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:41.428] Pod Template:
I1206 16:16:41.428]   Labels:  app=mock
I1206 16:16:41.428]   Containers:
I1206 16:16:41.428]    mock-container:
I1206 16:16:41.428]     Image:        k8s.gcr.io/pause:2.0
I1206 16:16:41.429]     Port:         9949/TCP
... skipping 11 lines ...
I1206 16:16:41.430] Namespace:    namespace-1544112994-74
I1206 16:16:41.430] Selector:     app=mock2
I1206 16:16:41.430] Labels:       app=mock2
I1206 16:16:41.430]               status=replaced
I1206 16:16:41.430] Annotations:  <none>
I1206 16:16:41.431] Replicas:     1 current / 1 desired
I1206 16:16:41.431] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 16:16:41.431] Pod Template:
I1206 16:16:41.431]   Labels:  app=mock2
I1206 16:16:41.431]   Containers:
I1206 16:16:41.431]    mock-container:
I1206 16:16:41.431]     Image:        k8s.gcr.io/pause:2.0
I1206 16:16:41.431]     Port:         9949/TCP
... skipping 110 lines ...
I1206 16:16:46.503] persistentvolume/pv0001 created
I1206 16:16:46.603] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I1206 16:16:46.682] persistentvolume "pv0001" deleted
I1206 16:16:46.846] persistentvolume/pv0002 created
I1206 16:16:46.951] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I1206 16:16:47.032] persistentvolume "pv0002" deleted
W1206 16:16:47.133] E1206 16:16:46.849098   55476 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
W1206 16:16:47.197] E1206 16:16:47.197093   55476 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I1206 16:16:47.298] persistentvolume/pv0003 created
I1206 16:16:47.298] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I1206 16:16:47.379] persistentvolume "pv0003" deleted
I1206 16:16:47.480] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 16:16:47.494] +++ exit code: 0
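The two pv_protection_controller conflicts logged above are routine optimistic-concurrency losses: every write carries the object's metadata.resourceVersion, and the API server rejects an update built from a stale version with "the object has been modified". Controllers simply re-read and retry, which is also what the earlier replica_set.go "Sync ... failed with Operation cannot be fulfilled" lines show. The same retry pattern from the CLI, sketched:

  kubectl get pv pv0002 -o yaml > pv.yaml   # re-read to pick up the current resourceVersion
  kubectl replace -f pv.yaml                # a stale resourceVersion here is rejected with a 409 Conflict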
I1206 16:16:47.531] Recording: run_persistent_volume_claims_tests
... skipping 475 lines ...
I1206 16:16:51.876] yes
I1206 16:16:51.876] has:the server doesn't have a resource type
I1206 16:16:51.947] Successful
I1206 16:16:51.947] message:yes
I1206 16:16:51.947] has:yes
I1206 16:16:52.019] Successful
I1206 16:16:52.019] message:error: --subresource can not be used with NonResourceURL
I1206 16:16:52.019] has:subresource can not be used with NonResourceURL
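kubectl auth can-i takes either a resource (optionally narrowed with --subresource) or a non-resource URL; combining --subresource with a non-resource URL is exactly what the error above rejects. The two valid forms, sketched:

  kubectl auth can-i get pods --subresource=log   # resource plus subresource
  kubectl auth can-i get /healthz                 # non-resource URL, no --subresource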
I1206 16:16:52.100] Successful
I1206 16:16:52.182] Successful
I1206 16:16:52.182] message:yes
I1206 16:16:52.182] 0
I1206 16:16:52.182] has:0
... skipping 6 lines ...
I1206 16:16:52.378] role.rbac.authorization.k8s.io/testing-R reconciled
I1206 16:16:52.472] legacy-script.sh:736: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I1206 16:16:52.559] legacy-script.sh:737: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I1206 16:16:52.645] legacy-script.sh:738: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I1206 16:16:52.736] legacy-script.sh:739: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I1206 16:16:52.817] Successful
I1206 16:16:52.817] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I1206 16:16:52.817] has:only rbac.authorization.k8s.io/v1 is supported
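kubectl auth reconcile only accepts rbac.authorization.k8s.io/v1 objects, so a manifest still declaring v1beta1, as in the check above, is rejected client-side before any server call. A sketch, assuming rbac-v1.yaml is a hypothetical manifest whose objects all declare apiVersion: rbac.authorization.k8s.io/v1:

  kubectl auth reconcile -f rbac-v1.yaml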
I1206 16:16:52.904] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I1206 16:16:52.911] role.rbac.authorization.k8s.io "testing-R" deleted
I1206 16:16:52.919] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I1206 16:16:52.928] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I1206 16:16:52.938] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I1206 16:16:54.020] +++ Running case: test-cmd.run_kubectl_explain_tests 
I1206 16:16:54.023] +++ working dir: /go/src/k8s.io/kubernetes
I1206 16:16:54.025] +++ command: run_kubectl_explain_tests
I1206 16:16:54.035] +++ [1206 16:16:54] Testing kubectl(v1:explain)
W1206 16:16:54.135] I1206 16:16:53.904626   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544113013-20909", Name:"cassandra", UID:"5824da6b-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"2754", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-7lxvc
W1206 16:16:54.136] I1206 16:16:53.912508   55476 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544113013-20909", Name:"cassandra", UID:"5824da6b-f972-11e8-9434-0242ac110002", APIVersion:"v1", ResourceVersion:"2761", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-79tnm
W1206 16:16:54.136] E1206 16:16:53.918426   55476 replica_set.go:450] Sync "namespace-1544113013-20909/cassandra" failed with replicationcontrollers "cassandra" not found
I1206 16:16:54.236] KIND:     Pod
I1206 16:16:54.237] VERSION:  v1
I1206 16:16:54.237] 
I1206 16:16:54.237] DESCRIPTION:
I1206 16:16:54.237]      Pod is a collection of containers that can run on a host. This resource is
I1206 16:16:54.237]      created by clients and scheduled onto hosts.
... skipping 849 lines ...
I1206 16:17:19.430] message:node/127.0.0.1 already uncordoned (dry run)
I1206 16:17:19.430] has:already uncordoned
I1206 16:17:19.514] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I1206 16:17:19.589] node/127.0.0.1 labeled
I1206 16:17:19.679] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I1206 16:17:19.749] Successful
I1206 16:17:19.750] message:error: cannot specify both a node name and a --selector option
I1206 16:17:19.750] See 'kubectl drain -h' for help and examples
I1206 16:17:19.750] has:cannot specify both a node name
I1206 16:17:19.815] Successful
I1206 16:17:19.815] message:error: USAGE: cordon NODE [flags]
I1206 16:17:19.815] See 'kubectl cordon -h' for help and examples
I1206 16:17:19.815] has:error\: USAGE\: cordon NODE
I1206 16:17:19.887] node/127.0.0.1 already uncordoned
I1206 16:17:19.961] Successful
I1206 16:17:19.962] message:error: You must provide one or more resources by argument or filename.
I1206 16:17:19.962] Example resource specifications include:
I1206 16:17:19.962]    '-f rsrc.yaml'
I1206 16:17:19.962]    '--filename=rsrc.json'
I1206 16:17:19.962]    '<resource> <name>'
I1206 16:17:19.962]    '<resource>'
I1206 16:17:19.962] has:must provide one or more resources
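The three node-management failures above are argument validation: kubectl drain accepts node names or a --selector but not both, kubectl cordon requires a node argument, and the resource builder behind these commands insists on at least one resource by argument or filename. Valid invocations, sketched against this run's node (which was labeled test=label just above):

  kubectl cordon 127.0.0.1
  kubectl drain 127.0.0.1 --ignore-daemonsets
  kubectl drain --selector=test=label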
... skipping 15 lines ...
I1206 16:17:20.380] Successful
I1206 16:17:20.380] message:The following kubectl-compatible plugins are available:
I1206 16:17:20.380] 
I1206 16:17:20.380] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I1206 16:17:20.380]   - warning: kubectl-version overwrites existing command: "kubectl version"
I1206 16:17:20.380] 
I1206 16:17:20.380] error: one plugin warning was found
I1206 16:17:20.380] has:kubectl-version overwrites existing command: "kubectl version"
I1206 16:17:20.453] Successful
I1206 16:17:20.453] message:The following kubectl-compatible plugins are available:
I1206 16:17:20.453] 
I1206 16:17:20.454] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1206 16:17:20.454] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I1206 16:17:20.454]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1206 16:17:20.454] 
I1206 16:17:20.454] error: one plugin warning was found
I1206 16:17:20.454] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I1206 16:17:20.526] Successful
I1206 16:17:20.526] message:The following kubectl-compatible plugins are available:
I1206 16:17:20.526] 
I1206 16:17:20.526] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1206 16:17:20.526] has:plugins are available
I1206 16:17:20.599] Successful
I1206 16:17:20.599] message:
I1206 16:17:20.599] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I1206 16:17:20.599] error: unable to find any kubectl plugins in your PATH
I1206 16:17:20.600] has:unable to find any kubectl plugins in your PATH
I1206 16:17:20.667] Successful
I1206 16:17:20.668] message:I am plugin foo
I1206 16:17:20.668] has:plugin foo
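The plugin checks above exercise kubectl's discovery rules: a plugin is any executable on PATH whose filename starts with kubectl-, an entry earlier in PATH shadows a later one with the same name, and a plugin whose name maps onto a built-in command (kubectl-version vs. kubectl version) draws an overwrite warning. The "I am plugin foo" message comes from such an executable; a plausible sketch of one:

  #!/bin/bash
  # test/fixtures/pkg/kubectl/plugins/kubectl-foo: discovered because the filename starts with kubectl-
  echo "I am plugin foo"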
I1206 16:17:20.737] Successful
I1206 16:17:20.738] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.883+0351853ea1ae78", GitCommit:"0351853ea1ae783ffe5db3cd6c1fef72bf5e57ec", GitTreeState:"clean", BuildDate:"2018-12-06T16:10:48Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I1206 16:17:20.813] 
I1206 16:17:20.815] +++ Running case: test-cmd.run_impersonation_tests 
I1206 16:17:20.818] +++ working dir: /go/src/k8s.io/kubernetes
I1206 16:17:20.820] +++ command: run_impersonation_tests
I1206 16:17:20.830] +++ [1206 16:17:20] Testing impersonation
I1206 16:17:20.901] Successful
I1206 16:17:20.901] message:error: requesting groups or user-extra for  without impersonating a user
I1206 16:17:20.901] has:without impersonating a user
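The impersonation error above fires when --as-group (or --as-user-extra) is passed without --as: groups and extras are only meaningful relative to an impersonated user. Valid usage, sketched with the identities from the csr checks below:

  kubectl get csr --as=user1 --as-group=system:authenticated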
I1206 16:17:21.055] certificatesigningrequest.certificates.k8s.io/foo created
I1206 16:17:21.150] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I1206 16:17:21.237] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I1206 16:17:21.322] certificatesigningrequest.certificates.k8s.io "foo" deleted
I1206 16:17:21.478] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 21 lines ...
W1206 16:17:21.984] I1206 16:17:21.980724   52130 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 5 lines ...
W1206 16:17:21.985] W1206 16:17:21.980939   52130 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 158 lines ...
W1206 16:17:22.012] E1206 16:17:21.984888   52130 controller.go:172] Get https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:6443: connect: connection refused
... skipping 3 lines ...
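Editor's note: the repeated balancerWrapper/clientconn warnings collapsed above are not test assertions; they are grpc-go's client-side reconnect loop firing after the harness has torn down the etcd it was dialing at 127.0.0.1:2379. A minimal standalone sketch that reproduces the same "connection refused ... Reconnecting" behavior (not code from this repo; assumes google.golang.org/grpc is available and nothing is listening on the port):

package main

import (
	"time"

	"google.golang.org/grpc"
)

func main() {
	// grpc.Dial is non-blocking: it returns immediately and keeps retrying
	// the transport in the background, logging each failed attempt — the
	// same clientconn.go warning repeated in the log above.
	conn, err := grpc.Dial("127.0.0.1:2379", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	time.Sleep(5 * time.Second) // long enough to observe a few retry warnings
}

The retries are harmless here: the process exits moments later once cleanup finishes.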
W1206 16:17:22.031] + make test-integration
I1206 16:17:22.132] No resources found
I1206 16:17:22.132] pod "test-pod-1" force deleted
I1206 16:17:22.132] +++ [1206 16:17:21] TESTS PASSED
I1206 16:17:22.132] junit report dir: /workspace/artifacts
I1206 16:17:22.133] +++ [1206 16:17:22] Clean up complete
... skipping 220 lines ...
I1206 16:30:39.672] ok  	k8s.io/kubernetes/test/integration/master	342.121s
I1206 16:30:39.672] ok  	k8s.io/kubernetes/test/integration/metrics	8.242s
I1206 16:30:39.672] ok  	k8s.io/kubernetes/test/integration/objectmeta	4.715s
I1206 16:30:39.672] ok  	k8s.io/kubernetes/test/integration/openshift	0.912s
I1206 16:30:39.672] ok  	k8s.io/kubernetes/test/integration/pods	11.910s
I1206 16:30:39.673] ok  	k8s.io/kubernetes/test/integration/quota	8.546s
I1206 16:30:39.673] FAIL	k8s.io/kubernetes/test/integration/replicaset	52.045s
I1206 16:30:39.673] ok  	k8s.io/kubernetes/test/integration/replicationcontroller	55.592s
I1206 16:30:39.673] [restful] 2018/12/06 16:21:19 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:39771/swaggerapi
I1206 16:30:39.673] [restful] 2018/12/06 16:21:19 log.go:33: [restful/swagger] https://127.0.0.1:39771/swaggerui/ is mapped to folder /swagger-ui/
I1206 16:30:39.673] [restful] 2018/12/06 16:21:21 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:39771/swaggerapi
I1206 16:30:39.673] [restful] 2018/12/06 16:21:21 log.go:33: [restful/swagger] https://127.0.0.1:39771/swaggerui/ is mapped to folder /swagger-ui/
I1206 16:30:39.673] ok  	k8s.io/kubernetes/test/integration/scale	11.650s
... skipping 14 lines ...
I1206 16:30:39.675] [restful] 2018/12/06 16:23:03 log.go:33: [restful/swagger] https://127.0.0.1:45893/swaggerui/ is mapped to folder /swagger-ui/
I1206 16:30:39.675] ok  	k8s.io/kubernetes/test/integration/tls	13.635s
I1206 16:30:39.675] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.142s
I1206 16:30:39.675] ok  	k8s.io/kubernetes/test/integration/volume	91.843s
I1206 16:30:39.675] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	147.687s
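Editor's note: the lone FAIL above is the replicaset integration package, whose failing case exercises the ReplicaSet controller adopting orphaned pods by stamping them with a controller OwnerReference. A sketch of how such adoption can be checked from a client (illustrative only, not the actual test code; assumes a reachable cluster, a default kubeconfig, and a hypothetical pod named "orphan-pod"):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from its default location ($HOME/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// A pod counts as adopted once an OwnerReference with Controller=true
	// points back at the owning ReplicaSet.
	pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "orphan-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ref := range pod.OwnerReferences {
		if ref.Controller != nil && *ref.Controller {
			fmt.Printf("adopted by %s %s\n", ref.Kind, ref.Name)
		}
	}
}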
I1206 16:30:41.079] +++ [1206 16:30:41] Saved JUnit XML test report to /workspace/artifacts/junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181206-161731.xml
I1206 16:30:41.083] Makefile:184: recipe for target 'test' failed
I1206 16:30:41.093] +++ [1206 16:30:41] Cleaning up etcd
W1206 16:30:41.194] make[1]: *** [test] Error 1
W1206 16:30:41.194] !!! [1206 16:30:41] Call tree:
W1206 16:30:41.194] !!! [1206 16:30:41]  1: hack/make-rules/test-integration.sh:105 runTests(...)
W1206 16:30:41.276] make: *** [test-integration] Error 1
I1206 16:30:41.376] +++ [1206 16:30:41] Integration test cleanup complete
I1206 16:30:41.376] Makefile:203: recipe for target 'test-integration' failed
W1206 16:30:42.353] Traceback (most recent call last):
W1206 16:30:42.354]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 167, in <module>
W1206 16:30:42.354]     main(ARGS.branch, ARGS.script, ARGS.force, ARGS.prow)
W1206 16:30:42.354]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 136, in main
W1206 16:30:42.354]     check(*cmd)
W1206 16:30:42.354]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W1206 16:30:42.354]     subprocess.check_call(cmd)
W1206 16:30:42.354]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
W1206 16:30:42.424]     raise CalledProcessError(retcode, cmd)
W1206 16:30:42.425] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181105-ceed87206', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E1206 16:30:42.432] Command failed
I1206 16:30:42.432] process 495 exited with code 1 after 25.7m
E1206 16:30:42.432] FAIL: ci-kubernetes-integration-master
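Editor's note: the traceback above is the expected failure path, not a bug in the runner. The docker run command exits with status 2 because make test-integration failed inside the container, subprocess.check_call turns that into CalledProcessError, and the scenario exits 1. The same propagation pattern in Go (a sketch; the check helper is named after the Python function in the traceback, not real runner code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// check mirrors the runner's check(): run a command and turn any
// non-zero exit status into a fatal error for the calling process.
func check(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "Command failed: %v\n", err)
		os.Exit(1)
	}
}

func main() {
	check("sh", "-c", "exit 2") // stand-in for the docker run invocation above
}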
I1206 16:30:42.432] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1206 16:30:42.905] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1206 16:30:42.958] process 123605 exited with code 0 after 0.0m
I1206 16:30:42.958] Call:  gcloud config get-value account
I1206 16:30:43.229] process 123618 exited with code 0 after 0.0m
I1206 16:30:43.229] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1206 16:30:43.229] Upload result and artifacts...
I1206 16:30:43.229] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/7159
I1206 16:30:43.230] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7159/artifacts
W1206 16:30:45.035] CommandException: One or more URLs matched no objects.
E1206 16:30:45.259] Command failed
I1206 16:30:45.259] process 123631 exited with code 1 after 0.0m
W1206 16:30:45.260] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7159/artifacts not exist yet
I1206 16:30:45.260] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7159/artifacts
I1206 16:30:49.493] process 123776 exited with code 0 after 0.1m
W1206 16:30:49.494] metadata path /workspace/_artifacts/metadata.json does not exist
W1206 16:30:49.494] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...