Result: FAILURE
Tests: 1 failed / 578 succeeded
Started: 2018-12-06 17:05
Elapsed: 25m50s
Version: v1.14.0-alpha.0.885+b5615259e5b1b4
Builder: gke-prow-default-pool-3c8994a8-kv0v
pod: f05723a8-f978-11e8-b720-0a580a6c02d1
infra-commit: ea22e5d80
repo: k8s.io/kubernetes
repo-commit: b5615259e5b1b4548d863f3140aadb58c85c6865
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/auth TestAuthModeAlwaysAllow 3.55s

go test -v k8s.io/kubernetes/test/integration/auth -run TestAuthModeAlwaysAllow$
I1206 17:19:36.958770  117302 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I1206 17:19:36.958795  117302 master.go:272] Node port range unspecified. Defaulting to 30000-32767.
I1206 17:19:36.958803  117302 master.go:228] Using reconciler: 
I1206 17:19:36.960037  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.960054  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.960083  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.960117  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.960475  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.960928  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.960947  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.960973  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.961016  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.961438  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.961471  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.961531  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.961564  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.961610  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.962050  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.962069  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.962090  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.962142  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.962284  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.963062  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.963094  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.963154  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.963256  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.963465  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.964033  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.964338  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.964406  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.964452  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.964618  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.965202  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.965298  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.965354  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.965428  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.965486  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.966193  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.966213  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.966242  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.966296  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.966428  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.967153  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.967528  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.967541  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.967564  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.967760  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.968209  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.968220  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.968239  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.968292  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.968409  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.968998  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.969014  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.969041  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.969107  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.969256  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.970004  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.970015  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.970035  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.970085  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.970183  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.971191  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.971609  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.971620  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.971640  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.971808  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.972371  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.972402  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.972425  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.972476  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.972579  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.972934  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.972948  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.972978  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.973038  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.973154  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.974142  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.974196  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.974204  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.974249  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.974299  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.974622  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.974755  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.974767  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.974788  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.974811  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.975116  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.985749  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.985773  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.985806  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.985851  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.986169  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.986440  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.986473  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.986502  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.986882  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.987412  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.987516  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.987537  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.987564  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.987710  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.988248  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.988262  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.988289  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.988383  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.988633  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.989158  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.989193  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.989220  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.989284  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.989484  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.989984  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.990126  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.990143  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.990205  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.990268  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.990944  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.991135  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.991169  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.991214  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.991256  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.991731  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.991748  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.991769  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.991840  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.991975  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.992282  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.992554  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.992571  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.992590  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.992734  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.992982  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.993159  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.993177  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.993197  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.993256  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.993755  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.993771  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.993792  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.993841  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.994002  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.995369  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.995788  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.995845  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.995874  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.995941  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.996336  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.996842  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.996928  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.996439  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.997068  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.997655  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.997699  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.997735  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.997953  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.998052  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.998534  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.998557  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.998578  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.998582  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.998852  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.999381  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:36.999414  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:36.999444  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:36.999544  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:36.999659  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.000174  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.000289  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.001082  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.001142  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.001184  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.001614  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.001810  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.001845  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.001987  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.002141  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.002629  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.003230  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.003284  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.003327  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.003423  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.004035  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.004070  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.004111  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.004210  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.004172  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.004962  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.005139  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.005163  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.005213  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.005274  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.005650  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.005800  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.005877  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.005776  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.005942  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.006252  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.006776  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.006800  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.006857  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.006921  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.007652  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.007749  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.007785  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.007817  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.008225  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.008525  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.008841  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.008871  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.008912  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.008945  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.009376  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.011305  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.011330  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.011351  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.011406  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.011659  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.012011  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.012027  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.012050  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.012095  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.012706  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.012746  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.012762  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.012781  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.012847  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.012992  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.013583  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.013600  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.013624  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.013755  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.014355  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.014628  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.014645  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.014694  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.014850  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.015278  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.015300  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.015331  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.015768  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.016097  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.016207  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.016223  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.016246  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.016330  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.016629  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.016823  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.017011  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.017036  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.017075  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.017144  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.017566  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.017755  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.017767  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.017789  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.017884  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.018224  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.018300  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.018320  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.018353  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.018431  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.018579  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.018861  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.018876  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.018895  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.018944  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.019876  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.020109  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.020138  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.020220  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.020291  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.020727  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.021104  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.021128  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.021160  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.021453  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.021915  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.022290  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.022322  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.022394  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.022728  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.023145  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.023425  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.023449  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.023486  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.023535  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.023932  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.024125  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.024153  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.024192  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.024259  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.024836  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.024878  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.024916  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.024980  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.025164  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.025774  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.026330  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.026346  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.026395  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.026444  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.036989  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.037122  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.037140  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.037161  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.037262  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.037649  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.037679  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.037699  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.037784  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.037926  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.038502  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.038520  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.038542  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.038614  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.038770  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.039337  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:19:37.045449  117302 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W1206 17:19:37.059413  117302 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1206 17:19:37.060053  117302 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1206 17:19:37.062318  117302 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1206 17:19:37.076412  117302 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I1206 17:19:37.958658  117302 clientconn.go:551] parsed scheme: ""
I1206 17:19:37.958711  117302 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1206 17:19:37.958761  117302 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1206 17:19:37.958828  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:37.959359  117302 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1206 17:19:38.085379  117302 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I1206 17:19:38.090636  117302 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I1206 17:19:38.090658  117302 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I1206 17:19:38.097144  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1206 17:19:38.099870  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1206 17:19:38.106003  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1206 17:19:38.108419  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I1206 17:19:38.110818  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I1206 17:19:38.113565  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I1206 17:19:38.116048  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1206 17:19:38.118701  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1206 17:19:38.121504  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1206 17:19:38.124104  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1206 17:19:38.133561  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I1206 17:19:38.136129  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1206 17:19:38.138841  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1206 17:19:38.142435  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1206 17:19:38.145017  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1206 17:19:38.147440  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1206 17:19:38.149914  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1206 17:19:38.152458  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1206 17:19:38.155110  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1206 17:19:38.158398  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1206 17:19:38.161015  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1206 17:19:38.163587  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1206 17:19:38.165974  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I1206 17:19:38.168416  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1206 17:19:38.170945  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1206 17:19:38.173554  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1206 17:19:38.176211  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1206 17:19:38.179683  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1206 17:19:38.182216  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1206 17:19:38.185609  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1206 17:19:38.188093  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1206 17:19:38.190551  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1206 17:19:38.196078  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1206 17:19:38.198544  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1206 17:19:38.200941  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1206 17:19:38.203349  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1206 17:19:38.205800  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1206 17:19:38.208419  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1206 17:19:38.210949  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1206 17:19:38.215522  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1206 17:19:38.218187  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1206 17:19:38.220441  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1206 17:19:38.222910  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1206 17:19:38.225284  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1206 17:19:38.228165  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1206 17:19:38.230445  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1206 17:19:38.232867  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1206 17:19:38.235356  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1206 17:19:38.238127  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1206 17:19:38.240450  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1206 17:19:38.281545  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1206 17:19:38.321596  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1206 17:19:38.361538  117302 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1206 17:19:38.401575  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1206 17:19:38.441770  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1206 17:19:38.481267  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1206 17:19:38.521338  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1206 17:19:38.561312  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1206 17:19:38.601511  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1206 17:19:38.641643  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1206 17:19:38.681354  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I1206 17:19:38.722227  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1206 17:19:38.761380  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1206 17:19:38.802611  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1206 17:19:38.841316  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1206 17:19:38.881626  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1206 17:19:38.921092  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1206 17:19:38.961173  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1206 17:19:39.001317  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1206 17:19:39.041335  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1206 17:19:39.081421  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1206 17:19:39.124570  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1206 17:19:39.161937  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1206 17:19:39.201361  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1206 17:19:39.241296  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1206 17:19:39.281374  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1206 17:19:39.321522  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1206 17:19:39.361336  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1206 17:19:39.401521  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1206 17:19:39.441582  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1206 17:19:39.481101  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1206 17:19:39.521475  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1206 17:19:39.561565  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1206 17:19:39.606933  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1206 17:19:39.641271  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1206 17:19:39.681462  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1206 17:19:39.721017  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1206 17:19:39.793272  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1206 17:19:39.800942  117302 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1206 17:19:39.841418  117302 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1206 17:19:39.881055  117302 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1206 17:19:39.920891  117302 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1206 17:19:39.961325  117302 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1206 17:19:40.001111  117302 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1206 17:19:40.041041  117302 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1206 17:19:40.081854  117302 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1206 17:19:40.121130  117302 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1206 17:19:40.161045  117302 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1206 17:19:40.201227  117302 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1206 17:19:40.241088  117302 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1206 17:19:40.281054  117302 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1206 17:19:40.321044  117302 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1206 17:19:40.502058  117302 controller.go:170] Shutting down kubernetes service endpoint reconciler
				from junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181206-171804.xml
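
To reproduce locally (a hedged sketch, not necessarily the CI job's exact invocation: the log above shows the test dialing etcd at 127.0.0.1:2379, so the integration suite assumes a local etcd is already running; GOPATH setup and etcd flags may vary by checkout):

    # start a local etcd serving on the address the test dials (assumption: default v3 flags)
    etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &
    # run only the failing test, as in the command shown above
    go test -v k8s.io/kubernetes/test/integration/auth -run 'TestAuthModeAlwaysAllow$'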



578 Passed Tests (omitted)

4 Skipped Tests (omitted)

Error lines from build-log.txt

... skipping 10 lines ...
I1206 17:05:11.432] process 212 exited with code 0 after 0.1m
I1206 17:05:11.432] Call:  gcloud config get-value account
I1206 17:05:11.880] process 225 exited with code 0 after 0.0m
I1206 17:05:11.880] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1206 17:05:11.881] Call:  kubectl get -oyaml pods/f05723a8-f978-11e8-b720-0a580a6c02d1
W1206 17:05:13.411] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E1206 17:05:13.414] Command failed
I1206 17:05:13.415] process 238 exited with code 1 after 0.0m
E1206 17:05:13.415] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/f05723a8-f978-11e8-b720-0a580a6c02d1']' returned non-zero exit status 1
I1206 17:05:13.415] Root: /workspace
I1206 17:05:13.415] cd to /workspace
I1206 17:05:13.415] Checkout: /workspace/k8s.io/kubernetes master to /workspace/k8s.io/kubernetes
I1206 17:05:13.416] Call:  git init k8s.io/kubernetes
... skipping 795 lines ...
W1206 17:13:20.882] I1206 17:13:20.881489   55659 controllermanager.go:516] Started "nodelifecycle"
W1206 17:13:20.882] I1206 17:13:20.881708   55659 node_lifecycle_controller.go:423] Starting node controller
W1206 17:13:20.883] I1206 17:13:20.881733   55659 controller_utils.go:1027] Waiting for caches to sync for taint controller
W1206 17:13:20.883] I1206 17:13:20.882099   55659 controllermanager.go:516] Started "deployment"
W1206 17:13:20.883] I1206 17:13:20.882633   55659 deployment_controller.go:152] Starting deployment controller
W1206 17:13:20.883] I1206 17:13:20.882657   55659 controller_utils.go:1027] Waiting for caches to sync for deployment controller
W1206 17:13:20.883] E1206 17:13:20.882633   55659 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1206 17:13:20.884] W1206 17:13:20.882756   55659 controllermanager.go:508] Skipping "service"
W1206 17:13:20.884] W1206 17:13:20.882771   55659 controllermanager.go:508] Skipping "nodeipam"
W1206 17:13:20.884] I1206 17:13:20.883159   55659 controllermanager.go:516] Started "clusterrole-aggregation"
W1206 17:13:20.884] I1206 17:13:20.883505   55659 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W1206 17:13:20.884] I1206 17:13:20.883528   55659 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller
W1206 17:13:20.885] I1206 17:13:20.883614   55659 controllermanager.go:516] Started "pvc-protection"
... skipping 23 lines ...
W1206 17:13:20.899] I1206 17:13:20.896392   55659 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W1206 17:13:20.899] I1206 17:13:20.896412   55659 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
W1206 17:13:20.899] I1206 17:13:20.896442   55659 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W1206 17:13:20.900] I1206 17:13:20.896570   55659 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
W1206 17:13:20.900] I1206 17:13:20.896637   55659 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
W1206 17:13:20.900] I1206 17:13:20.896686   55659 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
W1206 17:13:20.900] E1206 17:13:20.896708   55659 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1206 17:13:20.900] I1206 17:13:20.896751   55659 controllermanager.go:516] Started "resourcequota"
W1206 17:13:20.901] I1206 17:13:20.896781   55659 resource_quota_controller.go:276] Starting resource quota controller
W1206 17:13:20.901] I1206 17:13:20.896799   55659 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W1206 17:13:20.901] I1206 17:13:20.896833   55659 resource_quota_monitor.go:301] QuotaMonitor running
W1206 17:13:20.901] I1206 17:13:20.897444   55659 controllermanager.go:516] Started "disruption"
W1206 17:13:20.901] I1206 17:13:20.897916   55659 disruption.go:288] Starting disruption controller
... skipping 29 lines ...
W1206 17:13:20.908] I1206 17:13:20.908276   55659 serviceaccounts_controller.go:115] Starting service account controller
W1206 17:13:20.909] I1206 17:13:20.908287   55659 controller_utils.go:1027] Waiting for caches to sync for service account controller
W1206 17:13:20.909] W1206 17:13:20.908295   55659 controllermanager.go:508] Skipping "csrsigning"
W1206 17:13:20.919] I1206 17:13:20.918635   55659 controllermanager.go:516] Started "namespace"
W1206 17:13:20.919] I1206 17:13:20.918739   55659 namespace_controller.go:186] Starting namespace controller
W1206 17:13:20.919] I1206 17:13:20.918819   55659 controller_utils.go:1027] Waiting for caches to sync for namespace controller
W1206 17:13:20.919] W1206 17:13:20.919458   55659 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
W1206 17:13:20.920] I1206 17:13:20.920543   55659 garbagecollector.go:133] Starting garbage collector controller
W1206 17:13:20.921] I1206 17:13:20.920568   55659 controllermanager.go:516] Started "garbagecollector"
W1206 17:13:20.921] I1206 17:13:20.920583   55659 graph_builder.go:308] GraphBuilder running
W1206 17:13:20.921] I1206 17:13:20.920570   55659 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1206 17:13:20.922] I1206 17:13:20.922321   55659 controllermanager.go:516] Started "ttl"
W1206 17:13:20.922] I1206 17:13:20.922355   55659 ttl_controller.go:116] Starting TTL controller
... skipping 15 lines ...
W1206 17:13:20.926] I1206 17:13:20.926616   55659 horizontal.go:156] Starting HPA controller
W1206 17:13:20.927] I1206 17:13:20.926815   55659 controller_utils.go:1027] Waiting for caches to sync for HPA controller
W1206 17:13:20.982] I1206 17:13:20.981921   55659 controller_utils.go:1034] Caches are synced for taint controller
W1206 17:13:20.982] I1206 17:13:20.981991   55659 taint_manager.go:198] Starting NoExecuteTaintManager
W1206 17:13:20.984] I1206 17:13:20.983742   55659 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
W1206 17:13:20.985] I1206 17:13:20.984813   55659 controller_utils.go:1034] Caches are synced for daemon sets controller
W1206 17:13:20.992] E1206 17:13:20.991760   55659 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W1206 17:13:20.996] E1206 17:13:20.996296   55659 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1206 17:13:20.999] I1206 17:13:20.998841   55659 controller_utils.go:1034] Caches are synced for disruption controller
W1206 17:13:20.999] I1206 17:13:20.998862   55659 disruption.go:296] Sending events to api server.
W1206 17:13:21.000] I1206 17:13:21.000195   55659 controller_utils.go:1034] Caches are synced for endpoint controller
W1206 17:13:21.001] E1206 17:13:21.001181   55659 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W1206 17:13:21.002] I1206 17:13:21.001631   55659 controller_utils.go:1034] Caches are synced for ReplicationController controller
W1206 17:13:21.003] I1206 17:13:21.002786   55659 controller_utils.go:1034] Caches are synced for job controller
W1206 17:13:21.005] I1206 17:13:21.004852   55659 controller_utils.go:1034] Caches are synced for certificate controller
W1206 17:13:21.005] I1206 17:13:21.005304   55659 controller_utils.go:1034] Caches are synced for GC controller
W1206 17:13:21.008] I1206 17:13:21.008539   55659 controller_utils.go:1034] Caches are synced for service account controller
W1206 17:13:21.010] I1206 17:13:21.010603   52307 controller.go:608] quota admission added evaluator for: serviceaccounts
W1206 17:13:21.019] I1206 17:13:21.019344   55659 controller_utils.go:1034] Caches are synced for namespace controller
W1206 17:13:21.022] I1206 17:13:21.022559   55659 controller_utils.go:1034] Caches are synced for TTL controller
W1206 17:13:21.025] I1206 17:13:21.025057   55659 controller_utils.go:1034] Caches are synced for PV protection controller
W1206 17:13:21.026] I1206 17:13:21.026092   55659 controller_utils.go:1034] Caches are synced for ReplicaSet controller
W1206 17:13:21.027] I1206 17:13:21.027072   55659 controller_utils.go:1034] Caches are synced for HPA controller
W1206 17:13:21.097] W1206 17:13:21.096418   55659 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1206 17:13:21.184] I1206 17:13:21.184143   55659 controller_utils.go:1034] Caches are synced for PVC protection controller
W1206 17:13:21.200] I1206 17:13:21.199716   55659 controller_utils.go:1034] Caches are synced for attach detach controller
W1206 17:13:21.202] I1206 17:13:21.202295   55659 controller_utils.go:1034] Caches are synced for expand controller
W1206 17:13:21.225] I1206 17:13:21.225033   55659 controller_utils.go:1034] Caches are synced for stateful set controller
W1206 17:13:21.226] I1206 17:13:21.225153   55659 controller_utils.go:1034] Caches are synced for persistent volume controller
W1206 17:13:21.283] I1206 17:13:21.282921   55659 controller_utils.go:1034] Caches are synced for deployment controller
... skipping 34 lines ...
I1206 17:13:22.157] Successful: --client --output json has no server info
I1206 17:13:22.160] +++ [1206 17:13:22] Testing kubectl version: compare json output using additional --short flag
I1206 17:13:22.289] Successful: --short --output client json info is equal to non short result
I1206 17:13:22.295] Successful: --short --output server json info is equal to non short result
I1206 17:13:22.298] +++ [1206 17:13:22] Testing kubectl version: compare json output with yaml output
W1206 17:13:22.398] I1206 17:13:22.375975   55659 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1206 17:13:22.399] E1206 17:13:22.389619   55659 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1206 17:13:22.421] I1206 17:13:22.420967   55659 controller_utils.go:1034] Caches are synced for garbage collector controller
W1206 17:13:22.422] I1206 17:13:22.421019   55659 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1206 17:13:22.477] I1206 17:13:22.476596   55659 controller_utils.go:1034] Caches are synced for garbage collector controller
I1206 17:13:22.577] Successful: --output json/yaml has identical information
I1206 17:13:22.577] +++ exit code: 0
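These version checks compare kubectl's output encodings against each other; roughly the same comparison can be done by hand with the standard flags (assuming any kubectl of this vintage):

  kubectl version --client -o json   # client block only, no server info
  kubectl version --short -o json    # abbreviated form of the same data
  kubectl version -o json            # full output as JSON
  kubectl version -o yaml            # identical information, YAML encoding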
I1206 17:13:22.578] Recording: run_kubectl_config_set_tests
... skipping 42 lines ...
I1206 17:13:24.941] +++ working dir: /go/src/k8s.io/kubernetes
I1206 17:13:24.943] +++ command: run_RESTMapper_evaluation_tests
I1206 17:13:24.953] +++ [1206 17:13:24] Creating namespace namespace-1544116404-10397
I1206 17:13:25.020] namespace/namespace-1544116404-10397 created
I1206 17:13:25.085] Context "test" modified.
I1206 17:13:25.091] +++ [1206 17:13:25] Testing RESTMapper
I1206 17:13:25.199] +++ [1206 17:13:25] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1206 17:13:25.213] +++ exit code: 0
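The RESTMapper test above probes an unknown kind, and the discovery table below is the server's resource listing; both are reproducible with stock commands:

  kubectl get unknownresourcetype   # expected error: no such resource type
  kubectl api-resources             # prints the NAME / SHORTNAMES / APIGROUP / NAMESPACED / KIND table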
I1206 17:13:25.321] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1206 17:13:25.322] bindings                                                                      true         Binding
I1206 17:13:25.322] componentstatuses                 cs                                          false        ComponentStatus
I1206 17:13:25.322] configmaps                        cm                                          true         ConfigMap
I1206 17:13:25.322] endpoints                         ep                                          true         Endpoints
... skipping 588 lines ...
I1206 17:13:44.769] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 17:13:44.991] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 17:13:45.112] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 17:13:45.339] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 17:13:45.453] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 17:13:45.573] pod "valid-pod" force deleted
W1206 17:13:45.673] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1206 17:13:45.674] error: setting 'all' parameter but found a non-empty selector.
W1206 17:13:45.674] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
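The three messages above are kubectl's guardrails around bulk and forced deletion; a sketch of the invocations that trigger them, assuming the valid-pod fixture from core.sh:

  kubectl delete pods                                              # no name, selector, or --all: rejected
  kubectl delete pods --all -l name=valid-pod                      # --all plus a selector: rejected
  kubectl delete pods -l name=valid-pod --force --grace-period=0   # succeeds, with the termination warning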
I1206 17:13:45.775] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I1206 17:13:45.849] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I1206 17:13:45.953] namespace/test-kubectl-describe-pod created
I1206 17:13:46.081] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I1206 17:13:46.204] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I1206 17:13:47.278] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1206 17:13:47.345] poddisruptionbudget.policy/test-pdb-4 created
I1206 17:13:47.437] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1206 17:13:47.602] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:13:47.771] pod/env-test-pod created
W1206 17:13:47.871] I1206 17:13:46.848168   52307 controller.go:608] quota admission added evaluator for: poddisruptionbudgets.policy
W1206 17:13:47.872] error: min-available and max-unavailable cannot be both specified
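The min/max error is kubectl's flag validation for PodDisruptionBudgets: --min-available and --max-unavailable are mutually exclusive. A sketch with a hypothetical selector:

  kubectl create pdb test-pdb --selector=app=rails --min-available=2                       # ok
  kubectl create pdb test-pdb-4 --selector=app=rails --max-unavailable=50%                 # ok
  kubectl create pdb bad-pdb --selector=app=rails --min-available=1 --max-unavailable=1    # rejected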
I1206 17:13:47.972] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I1206 17:13:47.972] Name:               env-test-pod
I1206 17:13:47.972] Namespace:          test-kubectl-describe-pod
I1206 17:13:47.972] Priority:           0
I1206 17:13:47.973] PriorityClassName:  <none>
I1206 17:13:47.973] Node:               <none>
... skipping 145 lines ...
W1206 17:13:59.061] I1206 17:13:58.540291   55659 namespace_controller.go:171] Namespace has been deleted test-kubectl-describe-pod
W1206 17:13:59.062] I1206 17:13:58.625034   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116434-23863", Name:"modified", UID:"51a9ba80-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-6d2tm
I1206 17:13:59.201] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:13:59.345] pod/valid-pod created
I1206 17:13:59.438] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 17:13:59.584] Successful
I1206 17:13:59.584] message:Error from server: cannot restore map from string
I1206 17:13:59.584] has:cannot restore map from string
I1206 17:13:59.671] Successful
I1206 17:13:59.671] message:pod/valid-pod patched (no change)
I1206 17:13:59.672] has:patched (no change)
I1206 17:13:59.752] pod/valid-pod patched
I1206 17:13:59.853] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I1206 17:14:00.347] pod/valid-pod patched
I1206 17:14:00.438] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1206 17:14:00.510] pod/valid-pod patched
I1206 17:14:00.598] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1206 17:14:00.757] pod/valid-pod patched
I1206 17:14:00.854] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1206 17:14:01.019] +++ [1206 17:14:01] "kubectl patch with resourceVersion 489" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W1206 17:14:01.119] E1206 17:13:59.577537   52307 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I1206 17:14:01.247] pod "valid-pod" deleted
I1206 17:14:01.257] pod/valid-pod replaced
I1206 17:14:01.349] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1206 17:14:01.510] Successful
I1206 17:14:01.511] message:error: --grace-period must have --force specified
I1206 17:14:01.511] has:\-\-grace-period must have \-\-force specified
I1206 17:14:01.662] Successful
I1206 17:14:01.663] message:error: --timeout must have --force specified
I1206 17:14:01.663] has:\-\-timeout must have \-\-force specified
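Both failures come from the shared delete/replace flag validation: --grace-period and --timeout are only honored together with --force. Assuming these checks drive kubectl replace on valid-pod (which carries the same flags), the shape is:

  kubectl replace --grace-period=1 -f pod.yaml           # rejected: needs --force
  kubectl replace --force --grace-period=1 -f pod.yaml   # delete-and-recreate with a short grace period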
I1206 17:14:01.814] node/node-v1-test created
W1206 17:14:01.915] W1206 17:14:01.814560   55659 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1206 17:14:02.016] node/node-v1-test replaced
I1206 17:14:02.063] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1206 17:14:02.136] node "node-v1-test" deleted
I1206 17:14:02.226] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1206 17:14:02.479] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I1206 17:14:03.367] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 21 lines ...
I1206 17:14:04.275] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:04.281] +++ [1206 17:14:04] Creating namespace namespace-1544116444-614
W1206 17:14:04.382] Edit cancelled, no changes made.
W1206 17:14:04.597] Edit cancelled, no changes made.
W1206 17:14:04.597] Edit cancelled, no changes made.
W1206 17:14:04.597] Edit cancelled, no changes made.
W1206 17:14:04.597] error: 'name' already has a value (valid-pod), and --overwrite is false
W1206 17:14:04.598] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
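The label error above is kubectl refusing to change an existing label value without an explicit opt-in:

  kubectl label pod valid-pod name=valid-pod-2              # rejected: 'name' already has a value
  kubectl label pod valid-pod name=valid-pod-2 --overwrite  # replaces the value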
I1206 17:14:04.698] namespace/namespace-1544116444-614 created
I1206 17:14:04.698] Context "test" modified.
I1206 17:14:04.751] core.sh:610: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:04.895] pod/redis-master created
I1206 17:14:04.899] pod/valid-pod created
... skipping 77 lines ...
I1206 17:14:11.962] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1206 17:14:11.963] +++ working dir: /go/src/k8s.io/kubernetes
I1206 17:14:11.966] +++ command: run_kubectl_create_error_tests
I1206 17:14:11.976] +++ [1206 17:14:11] Creating namespace namespace-1544116451-15635
I1206 17:14:12.042] namespace/namespace-1544116451-15635 created
I1206 17:14:12.108] Context "test" modified.
I1206 17:14:12.114] +++ [1206 17:14:12] Testing kubectl create with error
W1206 17:14:12.215] Error: required flag(s) "filename" not set
W1206 17:14:12.215] 
W1206 17:14:12.215] 
W1206 17:14:12.215] Examples:
W1206 17:14:12.215]   # Create a pod using the data in pod.json.
W1206 17:14:12.215]   kubectl create -f ./pod.json
W1206 17:14:12.215]   
... skipping 38 lines ...
W1206 17:14:12.219]   kubectl create -f FILENAME [options]
W1206 17:14:12.219] 
W1206 17:14:12.219] Use "kubectl <command> --help" for more information about a given command.
W1206 17:14:12.219] Use "kubectl options" for a list of global command-line options (applies to all commands).
W1206 17:14:12.219] 
W1206 17:14:12.219] required flag(s) "filename" not set
I1206 17:14:12.328] +++ [1206 17:14:12] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1206 17:14:12.429] kubectl convert is DEPRECATED and will be removed in a future version.
W1206 17:14:12.429] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1206 17:14:12.530] +++ exit code: 0
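As the validation error above suggests, client-side schema validation can be bypassed when sending a known-bad manifest is the point of the test; a sketch against the same fixture:

  kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml                    # fails client-side validation
  kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml --validate=false   # shipped to the server as-is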
I1206 17:14:12.978] Recording: run_kubectl_apply_tests
I1206 17:14:12.978] Running command: run_kubectl_apply_tests
I1206 17:14:12.994] 
... skipping 13 lines ...
I1206 17:14:13.915] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I1206 17:14:14.754] deployment.extensions "test-deployment-retainkeys" deleted
I1206 17:14:14.843] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:14.988] pod/selector-test-pod created
I1206 17:14:15.075] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1206 17:14:15.153] Successful
I1206 17:14:15.153] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1206 17:14:15.154] has:pods "selector-test-pod-dont-apply" not found
I1206 17:14:15.226] pod "selector-test-pod" deleted
I1206 17:14:15.312] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:15.539] pod/test-pod created (server dry run)
I1206 17:14:15.633] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:15.779] pod/test-pod created
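The "(server dry run)" suffix above marks requests that ran the full server-side admission chain without persisting anything; in kubectl of this vintage the flag was spelled --server-dry-run (later folded into --dry-run=server):

  kubectl apply -f pod.yaml --server-dry-run   # hypothetical manifest; nothing is stored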
... skipping 6 lines ...
W1206 17:14:15.881] I1206 17:14:14.379367   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116453-14023", Name:"test-deployment-retainkeys", UID:"5ab8297e-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-deployment-retainkeys-7495cff5f to 1
W1206 17:14:15.882] I1206 17:14:14.382244   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116453-14023", Name:"test-deployment-retainkeys-7495cff5f", UID:"5b0e086b-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-7495cff5f-c7dgt
I1206 17:14:15.982] pod/test-pod configured (server dry run)
I1206 17:14:16.018] apply.sh:91: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I1206 17:14:16.089] pod "test-pod" deleted
I1206 17:14:16.299] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W1206 17:14:16.400] E1206 17:14:16.303506   52307 autoregister_controller.go:190] v1alpha1.mygroup.example.com failed with : apiservices.apiregistration.k8s.io "v1alpha1.mygroup.example.com" already exists
W1206 17:14:16.469] I1206 17:14:16.469245   52307 clientconn.go:551] parsed scheme: ""
W1206 17:14:16.470] I1206 17:14:16.469276   52307 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1206 17:14:16.470] I1206 17:14:16.469329   52307 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1206 17:14:16.470] I1206 17:14:16.469433   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:14:16.470] I1206 17:14:16.469914   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:14:16.547] I1206 17:14:16.546994   52307 controller.go:608] quota admission added evaluator for: resources.mygroup.example.com
W1206 17:14:16.627] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1206 17:14:16.728] kind.mygroup.example.com/myobj created (server dry run)
I1206 17:14:16.728] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1206 17:14:16.798] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:16.952] pod/a created
I1206 17:14:18.704] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I1206 17:14:18.787] Successful
I1206 17:14:18.787] message:Error from server (NotFound): pods "b" not found
I1206 17:14:18.787] has:pods "b" not found
I1206 17:14:18.938] pod/b created
I1206 17:14:18.951] pod/a pruned
I1206 17:14:20.632] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I1206 17:14:20.711] Successful
I1206 17:14:20.711] message:Error from server (NotFound): pods "a" not found
I1206 17:14:20.712] has:pods "a" not found
I1206 17:14:20.785] pod "b" deleted
I1206 17:14:20.874] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:21.017] pod/a created
I1206 17:14:21.107] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I1206 17:14:21.185] Successful
I1206 17:14:21.185] message:Error from server (NotFound): pods "b" not found
I1206 17:14:21.185] has:pods "b" not found
I1206 17:14:21.327] pod/b created
I1206 17:14:21.418] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I1206 17:14:21.503] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I1206 17:14:21.589] pod "a" deleted
I1206 17:14:21.595] pod "b" deleted
I1206 17:14:21.803] Successful
I1206 17:14:21.804] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I1206 17:14:21.804] has:all resources selected for prune without explicitly passing --all
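The prune guard exists because --prune deletes previously applied objects that are absent from the current input, so kubectl demands either a label selector or an explicit --all; a sketch with placeholder dir/ and selector:

  kubectl apply -f dir/ --prune                # rejected: no selector, no --all
  kubectl apply -f dir/ --prune -l app=demo    # prunes only objects matching the selector
  kubectl apply -f dir/ --prune --all          # prunes everything previously applied from dir/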
I1206 17:14:22.017] pod/a created
I1206 17:14:22.027] pod/b created
I1206 17:14:22.039] service/prune-svc created
I1206 17:14:23.577] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I1206 17:14:23.697] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 138 lines ...
I1206 17:14:36.859] Context "test" modified.
I1206 17:14:36.865] +++ [1206 17:14:36] Testing kubectl create filter
I1206 17:14:36.946] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:37.087] pod/selector-test-pod created
I1206 17:14:37.176] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1206 17:14:37.255] Successful
I1206 17:14:37.256] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1206 17:14:37.256] has:pods "selector-test-pod-dont-apply" not found
I1206 17:14:37.327] pod "selector-test-pod" deleted
I1206 17:14:37.345] +++ exit code: 0
I1206 17:14:37.840] Recording: run_kubectl_apply_deployments_tests
I1206 17:14:37.840] Running command: run_kubectl_apply_deployments_tests
I1206 17:14:37.859] 
... skipping 33 lines ...
I1206 17:14:39.745] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:39.814] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:39.895] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:40.041] deployment.extensions/nginx created
I1206 17:14:40.139] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I1206 17:14:44.327] Successful
I1206 17:14:44.327] message:Error from server (Conflict): error when applying patch:
I1206 17:14:44.328] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544116477-10738\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1206 17:14:44.328] to:
I1206 17:14:44.328] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I1206 17:14:44.328] Name: "nginx", Namespace: "namespace-1544116477-10738"
I1206 17:14:44.329] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["name":"nginx" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1544116477-10738/deployments/nginx" "uid":"6a59c49e-f97a-11e8-be0e-0242ac110002" "generation":'\x01' "creationTimestamp":"2018-12-06T17:14:40Z" "labels":map["name":"nginx"] "namespace":"namespace-1544116477-10738" "resourceVersion":"709" "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544116477-10738\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"]] "spec":map["replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e']] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647)] "status":map["replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["status":"False" "lastUpdateTime":"2018-12-06T17:14:40Z" "lastTransitionTime":"2018-12-06T17:14:40Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available"]] "observedGeneration":'\x01']]}
I1206 17:14:44.330] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I1206 17:14:44.330] has:Error from server (Conflict)
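This Conflict is the expected outcome: the applied manifest pins resourceVersion "99" in its last-applied-configuration while the live Deployment is already at 709, and it also changes the selector, so the server refuses the stale patch. The harness simply re-applies until it lands, which is the "deployment.extensions/nginx configured" a few lines below:

  kubectl apply -f hack/testdata/deployment-label-change2.yaml   # retried until no conflict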
W1206 17:14:44.430] I1206 17:14:40.044581   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116477-10738", Name:"nginx", UID:"6a59c49e-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W1206 17:14:44.431] I1206 17:14:40.048140   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116477-10738", Name:"nginx-5d56d6b95f", UID:"6a5a4a83-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-vtcmw
W1206 17:14:44.431] I1206 17:14:40.050334   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116477-10738", Name:"nginx-5d56d6b95f", UID:"6a5a4a83-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-b8mq4
W1206 17:14:44.431] I1206 17:14:40.050953   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116477-10738", Name:"nginx-5d56d6b95f", UID:"6a5a4a83-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-zz4sf
I1206 17:14:49.521] deployment.extensions/nginx configured
I1206 17:14:49.611] Successful
I1206 17:14:49.611] message:        "name": "nginx2"
I1206 17:14:49.612]           "name": "nginx2"
I1206 17:14:49.612] has:"name": "nginx2"
W1206 17:14:49.712] I1206 17:14:49.523447   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116477-10738", Name:"nginx", UID:"6fffff5f-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"732", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
W1206 17:14:49.713] I1206 17:14:49.527322   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116477-10738", Name:"nginx-7777658b9d", UID:"70008d1b-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-m9bdb
W1206 17:14:49.713] I1206 17:14:49.529824   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116477-10738", Name:"nginx-7777658b9d", UID:"70008d1b-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-z9vr8
W1206 17:14:49.713] I1206 17:14:49.530999   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116477-10738", Name:"nginx-7777658b9d", UID:"70008d1b-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-sfrgv
W1206 17:14:53.823] E1206 17:14:53.822878   55659 replica_set.go:450] Sync "namespace-1544116477-10738/nginx-7777658b9d" failed with Operation cannot be fulfilled on replicasets.apps "nginx-7777658b9d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1544116477-10738/nginx-7777658b9d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 70008d1b-f97a-11e8-be0e-0242ac110002, UID in object meta: 
W1206 17:14:53.826] E1206 17:14:53.825591   55659 replica_set.go:450] Sync "namespace-1544116477-10738/nginx-7777658b9d" failed with replicasets.apps "nginx-7777658b9d" not found
W1206 17:14:54.815] I1206 17:14:54.815238   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116477-10738", Name:"nginx", UID:"73277c77-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"764", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
W1206 17:14:54.819] I1206 17:14:54.818734   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116477-10738", Name:"nginx-7777658b9d", UID:"732808ac-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"765", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-9msjm
W1206 17:14:54.821] I1206 17:14:54.820625   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116477-10738", Name:"nginx-7777658b9d", UID:"732808ac-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"765", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-vxdv7
W1206 17:14:54.821] I1206 17:14:54.821129   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116477-10738", Name:"nginx-7777658b9d", UID:"732808ac-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"765", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-xm9s2
I1206 17:14:54.921] Successful
I1206 17:14:54.922] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 73 lines ...
I1206 17:14:55.962] +++ [1206 17:14:55] Creating namespace namespace-1544116495-7514
I1206 17:14:56.027] namespace/namespace-1544116495-7514 created
I1206 17:14:56.089] Context "test" modified.
I1206 17:14:56.095] +++ [1206 17:14:56] Testing kubectl get
I1206 17:14:56.177] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:56.255] Successful
I1206 17:14:56.255] message:Error from server (NotFound): pods "abc" not found
I1206 17:14:56.255] has:pods "abc" not found
I1206 17:14:56.335] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:56.414] Successful
I1206 17:14:56.415] message:Error from server (NotFound): pods "abc" not found
I1206 17:14:56.415] has:pods "abc" not found
I1206 17:14:56.494] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:56.568] Successful
I1206 17:14:56.569] message:{
I1206 17:14:56.569]     "apiVersion": "v1",
I1206 17:14:56.569]     "items": [],
... skipping 23 lines ...
I1206 17:14:56.869] has not:No resources found
I1206 17:14:56.942] Successful
I1206 17:14:56.943] message:NAME
I1206 17:14:56.943] has not:No resources found
I1206 17:14:57.023] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:57.122] Successful
I1206 17:14:57.122] message:error: the server doesn't have a resource type "foobar"
I1206 17:14:57.122] has not:No resources found
I1206 17:14:57.195] Successful
I1206 17:14:57.195] message:No resources found.
I1206 17:14:57.195] has:No resources found
I1206 17:14:57.271] Successful
I1206 17:14:57.271] message:
I1206 17:14:57.271] has not:No resources found
I1206 17:14:57.344] Successful
I1206 17:14:57.344] message:No resources found.
I1206 17:14:57.344] has:No resources found
I1206 17:14:57.425] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:14:57.503] Successful
I1206 17:14:57.503] message:Error from server (NotFound): pods "abc" not found
I1206 17:14:57.503] has:pods "abc" not found
I1206 17:14:57.505] FAIL!
I1206 17:14:57.505] message:Error from server (NotFound): pods "abc" not found
I1206 17:14:57.505] has not:List
I1206 17:14:57.506] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1206 17:14:57.612] Successful
I1206 17:14:57.612] message:I1206 17:14:57.562949   67830 loader.go:359] Config loaded from file /tmp/tmp.l9SMrMZsyk/.kube/config
I1206 17:14:57.612] I1206 17:14:57.563517   67830 loader.go:359] Config loaded from file /tmp/tmp.l9SMrMZsyk/.kube/config
I1206 17:14:57.612] I1206 17:14:57.564930   67830 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 995 lines ...
I1206 17:15:00.989] }
I1206 17:15:01.074] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 17:15:01.295] <no value>Successful
I1206 17:15:01.296] message:valid-pod:
I1206 17:15:01.296] has:valid-pod:
I1206 17:15:01.370] Successful
I1206 17:15:01.370] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1206 17:15:01.370] 	template was:
I1206 17:15:01.370] 		{.missing}
I1206 17:15:01.370] 	object given to jsonpath engine was:
I1206 17:15:01.371] 		map[string]interface {}{"spec":map[string]interface {}{"terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always"}}, "restartPolicy":"Always"}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}, "kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"valid-pod", "namespace":"namespace-1544116500-10356", "selfLink":"/api/v1/namespaces/namespace-1544116500-10356/pods/valid-pod", "uid":"76c94a7d-f97a-11e8-be0e-0242ac110002", "resourceVersion":"801", "creationTimestamp":"2018-12-06T17:15:00Z", "labels":map[string]interface {}{"name":"valid-pod"}}}
I1206 17:15:01.371] has:missing is not found
I1206 17:15:01.446] Successful
I1206 17:15:01.446] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1206 17:15:01.446] 	template was:
I1206 17:15:01.447] 		{{.missing}}
I1206 17:15:01.447] 	raw data was:
I1206 17:15:01.447] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2018-12-06T17:15:00Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1544116500-10356","resourceVersion":"801","selfLink":"/api/v1/namespaces/namespace-1544116500-10356/pods/valid-pod","uid":"76c94a7d-f97a-11e8-be0e-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1206 17:15:01.447] 	object given to template engine was:
I1206 17:15:01.448] 		map[metadata:map[uid:76c94a7d-f97a-11e8-be0e-0242ac110002 creationTimestamp:2018-12-06T17:15:00Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1544116500-10356 resourceVersion:801 selfLink:/api/v1/namespaces/namespace-1544116500-10356/pods/valid-pod] spec:map[priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[memory:512Mi cpu:1]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always]] dnsPolicy:ClusterFirst enableServiceLinks:true] status:map[phase:Pending qosClass:Guaranteed] apiVersion:v1 kind:Pod]
I1206 17:15:01.448] has:map has no entry for key "missing"
W1206 17:15:01.548] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
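Both template failures above are reproducible against any object; the jsonpath and go-template engines just report a missing key differently:

  kubectl get pod valid-pod -o jsonpath='{.metadata.name}'   # ok
  kubectl get pod valid-pod -o jsonpath='{.missing}'         # "missing is not found"
  kubectl get pod valid-pod -o go-template='{{.missing}}'    # "map has no entry for key"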
W1206 17:15:02.520] E1206 17:15:02.520109   68217 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I1206 17:15:02.621] Successful
I1206 17:15:02.621] message:NAME        READY   STATUS    RESTARTS   AGE
I1206 17:15:02.621] valid-pod   0/1     Pending   0          1s
I1206 17:15:02.621] has:STATUS
I1206 17:15:02.622] Successful
... skipping 80 lines ...
I1206 17:15:04.782]   terminationGracePeriodSeconds: 30
I1206 17:15:04.782] status:
I1206 17:15:04.783]   phase: Pending
I1206 17:15:04.783]   qosClass: Guaranteed
I1206 17:15:04.783] has:name: valid-pod
I1206 17:15:04.783] Successful
I1206 17:15:04.783] message:Error from server (NotFound): pods "invalid-pod" not found
I1206 17:15:04.783] has:"invalid-pod" not found
I1206 17:15:04.836] pod "valid-pod" deleted
I1206 17:15:04.921] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:15:05.067] pod/redis-master created
I1206 17:15:05.071] pod/valid-pod created
I1206 17:15:05.156] Successful
... skipping 305 lines ...
I1206 17:15:09.165] Running command: run_create_secret_tests
I1206 17:15:09.182] 
I1206 17:15:09.184] +++ Running case: test-cmd.run_create_secret_tests 
I1206 17:15:09.187] +++ working dir: /go/src/k8s.io/kubernetes
I1206 17:15:09.189] +++ command: run_create_secret_tests
I1206 17:15:09.278] Successful
I1206 17:15:09.278] message:Error from server (NotFound): secrets "mysecret" not found
I1206 17:15:09.278] has:secrets "mysecret" not found
W1206 17:15:09.379] I1206 17:15:08.368015   52307 clientconn.go:551] parsed scheme: ""
W1206 17:15:09.379] I1206 17:15:08.368050   52307 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1206 17:15:09.379] I1206 17:15:08.368090   52307 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1206 17:15:09.379] I1206 17:15:08.368134   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:15:09.380] I1206 17:15:08.368578   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:15:09.380] No resources found.
W1206 17:15:09.380] No resources found.
I1206 17:15:09.480] Successful
I1206 17:15:09.481] message:Error from server (NotFound): secrets "mysecret" not found
I1206 17:15:09.481] has:secrets "mysecret" not found
I1206 17:15:09.481] Successful
I1206 17:15:09.481] message:user-specified
I1206 17:15:09.481] has:user-specified
I1206 17:15:09.491] Successful
I1206 17:15:09.568] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"7bf2f352-f97a-11e8-be0e-0242ac110002","resourceVersion":"875","creationTimestamp":"2018-12-06T17:15:09Z"}}
... skipping 80 lines ...
I1206 17:15:11.414] has:Timeout exceeded while reading body
I1206 17:15:11.490] Successful
I1206 17:15:11.490] message:NAME        READY   STATUS    RESTARTS   AGE
I1206 17:15:11.491] valid-pod   0/1     Pending   0          1s
I1206 17:15:11.491] has:valid-pod
I1206 17:15:11.555] Successful
I1206 17:15:11.555] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1206 17:15:11.555] has:Invalid timeout value
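The two timeout behaviors in this block hang off kubectl's global --request-timeout flag: a well-formed short value lets the request start and then expire mid-read, while a malformed value is rejected before anything is sent:

  kubectl get pods --request-timeout=1s        # valid; a slow response hits "Timeout exceeded while reading body"
  kubectl get pods --request-timeout=invalid   # rejected client-side: Invalid timeout value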
I1206 17:15:11.630] pod "valid-pod" deleted
I1206 17:15:11.648] +++ exit code: 0
I1206 17:15:11.680] Recording: run_crd_tests
I1206 17:15:11.680] Running command: run_crd_tests
I1206 17:15:11.697] 
... skipping 26 lines ...
I1206 17:15:13.672] Successful
I1206 17:15:13.672] message:kind.mygroup.example.com/myobj
I1206 17:15:13.672] has:kind.mygroup.example.com/myobj
I1206 17:15:13.747] Successful
I1206 17:15:13.748] message:kind.mygroup.example.com/myobj
I1206 17:15:13.748] has:kind.mygroup.example.com/myobj
W1206 17:15:13.848] E1206 17:15:12.494906   52307 autoregister_controller.go:190] v1alpha1.mygroup.example.com failed with : apiservices.apiregistration.k8s.io "v1alpha1.mygroup.example.com" already exists
W1206 17:15:13.848] I1206 17:15:13.097270   52307 clientconn.go:551] parsed scheme: ""
W1206 17:15:13.848] I1206 17:15:13.097306   52307 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1206 17:15:13.849] I1206 17:15:13.097347   52307 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1206 17:15:13.849] I1206 17:15:13.097407   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:15:13.849] I1206 17:15:13.098064   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:15:13.849] I1206 17:15:13.172767   52307 clientconn.go:551] parsed scheme: ""
... skipping 128 lines ...
I1206 17:15:15.686] foo.company.com/test patched
I1206 17:15:15.769] crd.sh:237: Successful get foos/test {{.patched}}: value1
I1206 17:15:15.849] foo.company.com/test patched
I1206 17:15:15.935] crd.sh:239: Successful get foos/test {{.patched}}: value2
I1206 17:15:16.011] foo.company.com/test patched
I1206 17:15:16.103] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I1206 17:15:16.252] +++ [1206 17:15:16] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
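The expected failure above is because CustomResources carry no strategic-merge-patch metadata, so kubectl patch must fall back to a JSON merge patch, exactly as the recorded change-cause below shows; a sketch with a hypothetical value:

  kubectl patch foos/test --type=merge -p '{"patched":"value3"}'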
I1206 17:15:16.312] {
I1206 17:15:16.312]     "apiVersion": "company.com/v1",
I1206 17:15:16.312]     "kind": "Foo",
I1206 17:15:16.312]     "metadata": {
I1206 17:15:16.312]         "annotations": {
I1206 17:15:16.312]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
W1206 17:15:17.808] I1206 17:15:14.140415   52307 controller.go:608] quota admission added evaluator for: foos.company.com
W1206 17:15:17.809] I1206 17:15:17.454626   52307 controller.go:608] quota admission added evaluator for: bars.company.com
W1206 17:15:17.809] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 70720 Killed                  while [ ${tries} -lt 10 ]; do
W1206 17:15:17.809]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W1206 17:15:17.809] done
W1206 17:15:17.809] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 70719 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W1206 17:15:22.515] E1206 17:15:22.514168   55659 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources"]
W1206 17:15:22.701] I1206 17:15:22.701418   55659 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1206 17:15:22.703] I1206 17:15:22.703191   52307 clientconn.go:551] parsed scheme: ""
W1206 17:15:22.703] I1206 17:15:22.703221   52307 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1206 17:15:22.703] I1206 17:15:22.703269   52307 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1206 17:15:22.704] I1206 17:15:22.703476   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:15:22.704] I1206 17:15:22.703838   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 81 lines ...
I1206 17:15:34.288] +++ [1206 17:15:34] Testing cmd with image
I1206 17:15:34.370] Successful
I1206 17:15:34.370] message:deployment.apps/test1 created
I1206 17:15:34.370] has:deployment.apps/test1 created
I1206 17:15:34.438] deployment.extensions "test1" deleted
I1206 17:15:34.506] Successful
I1206 17:15:34.506] message:error: Invalid image name "InvalidImageName": invalid reference format
I1206 17:15:34.506] has:error: Invalid image name "InvalidImageName": invalid reference format
I1206 17:15:34.519] +++ exit code: 0
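Image names are run through container image reference validation before any object is created, so a mixed-case name like "InvalidImageName" fails client-side; a sketch of both paths:

  kubectl create deployment test1 --image=k8s.gcr.io/nginx:test-cmd   # ok
  kubectl create deployment test2 --image=InvalidImageName            # invalid reference format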
I1206 17:15:34.554] Recording: run_recursive_resources_tests
I1206 17:15:34.554] Running command: run_recursive_resources_tests
I1206 17:15:34.572] 
I1206 17:15:34.574] +++ Running case: test-cmd.run_recursive_resources_tests 
I1206 17:15:34.577] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I1206 17:15:34.718] Context "test" modified.
I1206 17:15:34.802] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:15:35.028] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:35.030] Successful
I1206 17:15:35.030] message:pod/busybox0 created
I1206 17:15:35.030] pod/busybox1 created
I1206 17:15:35.030] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1206 17:15:35.030] has:error validating data: kind not set
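The recursive suite walks hack/testdata/recursive/pod with kubectl's --recursive (-R) flag; the directory deliberately contains one manifest with "ind" instead of "kind", so the valid pods are created and the broken file is reported, as above:

  kubectl create -f hack/testdata/recursive/pod --recursive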
I1206 17:15:35.115] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:35.271] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1206 17:15:35.273] Successful
I1206 17:15:35.274] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 17:15:35.274] has:Object 'Kind' is missing
I1206 17:15:35.361] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:35.595] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1206 17:15:35.597] Successful
I1206 17:15:35.597] message:pod/busybox0 replaced
I1206 17:15:35.597] pod/busybox1 replaced
I1206 17:15:35.598] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1206 17:15:35.598] has:error validating data: kind not set
I1206 17:15:35.683] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:35.771] Successful
I1206 17:15:35.771] message:Name:               busybox0
I1206 17:15:35.771] Namespace:          namespace-1544116534-25480
I1206 17:15:35.772] Priority:           0
I1206 17:15:35.772] PriorityClassName:  <none>
... skipping 159 lines ...
I1206 17:15:35.782] has:Object 'Kind' is missing
I1206 17:15:35.861] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:36.025] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1206 17:15:36.027] Successful
I1206 17:15:36.027] message:pod/busybox0 annotated
I1206 17:15:36.027] pod/busybox1 annotated
I1206 17:15:36.028] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 17:15:36.028] has:Object 'Kind' is missing
I1206 17:15:36.111] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:36.350] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1206 17:15:36.352] Successful
I1206 17:15:36.352] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1206 17:15:36.352] pod/busybox0 configured
I1206 17:15:36.352] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1206 17:15:36.353] pod/busybox1 configured
I1206 17:15:36.353] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1206 17:15:36.353] has:error validating data: kind not set
I1206 17:15:36.434] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:15:36.570] deployment.extensions/nginx created
I1206 17:15:36.663] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1206 17:15:36.745] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 17:15:36.894] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I1206 17:15:36.896] Successful
... skipping 42 lines ...
I1206 17:15:36.969] deployment.extensions "nginx" deleted
I1206 17:15:37.058] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:37.205] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:37.207] Successful
I1206 17:15:37.207] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1206 17:15:37.207] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1206 17:15:37.207] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 17:15:37.207] has:Object 'Kind' is missing
I1206 17:15:37.291] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:37.370] Successful
I1206 17:15:37.370] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 17:15:37.371] has:busybox0:busybox1:
I1206 17:15:37.372] Successful
I1206 17:15:37.372] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 17:15:37.372] has:Object 'Kind' is missing
I1206 17:15:37.456] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:37.539] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 17:15:37.618] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1206 17:15:37.620] Successful
I1206 17:15:37.620] message:pod/busybox0 labeled
I1206 17:15:37.620] pod/busybox1 labeled
I1206 17:15:37.621] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 17:15:37.621] has:Object 'Kind' is missing
I1206 17:15:37.703] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:37.781] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 17:15:37.863] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1206 17:15:37.865] Successful
I1206 17:15:37.865] message:pod/busybox0 patched
I1206 17:15:37.866] pod/busybox1 patched
I1206 17:15:37.866] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 17:15:37.866] has:Object 'Kind' is missing
I1206 17:15:37.949] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:38.112] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:15:38.114] Successful
I1206 17:15:38.114] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1206 17:15:38.114] pod "busybox0" force deleted
I1206 17:15:38.114] pod "busybox1" force deleted
I1206 17:15:38.115] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1206 17:15:38.115] has:Object 'Kind' is missing
I1206 17:15:38.195] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:15:38.333] replicationcontroller/busybox0 created
I1206 17:15:38.337] replicationcontroller/busybox1 created
I1206 17:15:38.429] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:38.515] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:38.594] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I1206 17:15:38.675] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I1206 17:15:38.839] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1206 17:15:38.919] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1206 17:15:38.921] Successful
I1206 17:15:38.921] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1206 17:15:38.921] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1206 17:15:38.921] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:38.922] has:Object 'Kind' is missing
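The 1 2 80 values checked against .spec.minReplicas, .spec.maxReplicas, and .spec.targetCPUUtilizationPercentage pin down the autoscale parameters; the command under test is most likely of this shape (inferred from its output, not shown verbatim in the log):

    # Autoscale every RC found under the directory; the broken manifest
    # again produces the decode error captured in the message above.
    kubectl autoscale -f recursive/rc -R --min=1 --max=2 --cpu-percent=80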
I1206 17:15:38.990] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1206 17:15:39.066] horizontalpodautoscaler.autoscaling "busybox1" deleted
I1206 17:15:39.151] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:39.231] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I1206 17:15:39.312] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I1206 17:15:39.479] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1206 17:15:39.558] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1206 17:15:39.560] Successful
I1206 17:15:39.560] message:service/busybox0 exposed
I1206 17:15:39.561] service/busybox1 exposed
I1206 17:15:39.561] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:39.561] has:Object 'Kind' is missing
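The <no value> in the service assertions is the go-template's rendering of an unset port name: expose was given only a numeric port, so spec.ports[0].name is empty. A plausible reconstruction of the command:

    # Expose each RC on port 80 without naming the port; the template
    # {{(index .spec.ports 0).name}} then prints "<no value>".
    kubectl expose -f recursive/rc -R --port=80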
I1206 17:15:39.641] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:39.724] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I1206 17:15:39.805] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I1206 17:15:39.979] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I1206 17:15:40.059] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I1206 17:15:40.061] Successful
I1206 17:15:40.061] message:replicationcontroller/busybox0 scaled
I1206 17:15:40.061] replicationcontroller/busybox1 scaled
I1206 17:15:40.062] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:40.062] has:Object 'Kind' is missing
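Between the replicas: 1 and replicas: 2 assertions sits a recursive scale; its inferred shape (only the output appears in the log, so extra flags such as a current-replicas precondition may be omitted here):

    # Scale every RC in the tree to 2 replicas; the broken manifest still
    # fails to decode, while busybox0 and busybox1 are scaled.
    kubectl scale -f recursive/rc -R --replicas=2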
I1206 17:15:40.143] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1206 17:15:40.304] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:15:40.306] Successful
I1206 17:15:40.307] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1206 17:15:40.307] replicationcontroller "busybox0" force deleted
I1206 17:15:40.307] replicationcontroller "busybox1" force deleted
I1206 17:15:40.307] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:40.307] has:Object 'Kind' is missing
I1206 17:15:40.389] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:15:40.526] deployment.extensions/nginx1-deployment created
I1206 17:15:40.529] deployment.extensions/nginx0-deployment created
I1206 17:15:40.627] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1206 17:15:40.710] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1206 17:15:40.895] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1206 17:15:40.897] Successful
I1206 17:15:40.898] message:deployment.extensions/nginx1-deployment skipped rollback (current template already matches revision 1)
I1206 17:15:40.898] deployment.extensions/nginx0-deployment skipped rollback (current template already matches revision 1)
I1206 17:15:40.898] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 17:15:40.898] has:Object 'Kind' is missing
I1206 17:15:40.976] deployment.extensions/nginx1-deployment paused
I1206 17:15:40.979] deployment.extensions/nginx0-deployment paused
I1206 17:15:41.071] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1206 17:15:41.073] Successful
I1206 17:15:41.074] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 17:15:41.074] has:Object 'Kind' is missing
I1206 17:15:41.153] deployment.extensions/nginx1-deployment resumed
I1206 17:15:41.156] deployment.extensions/nginx0-deployment resumed
I1206 17:15:41.245] generic-resources.sh:408: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I1206 17:15:41.247] Successful
I1206 17:15:41.248] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 17:15:41.248] has:Object 'Kind' is missing
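The paused checks flip from true:true: to <no value>:<no value>: because resume clears spec.paused to its zero value, which is omitted on serialization, and the go-template prints <no value> for a missing field. The rollout commands implied by the output (a reconstruction):

    # Pause, verify, then resume every deployment in the tree.
    kubectl rollout pause -f recursive/deployment -R
    kubectl get deployment -o go-template='{{range .items}}{{.spec.paused}}:{{end}}'
    kubectl rollout resume -f recursive/deployment -R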
W1206 17:15:41.348] Error from server (NotFound): namespaces "non-native-resources" not found
W1206 17:15:41.349] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1206 17:15:41.349] I1206 17:15:34.361527   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116534-27230", Name:"test1", UID:"8ab9cb03-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"982", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W1206 17:15:41.349] I1206 17:15:34.366599   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116534-27230", Name:"test1-fb488bd5d", UID:"8aba4f64-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-2xcl8
W1206 17:15:41.349] I1206 17:15:36.573801   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116534-25480", Name:"nginx", UID:"8c0b5221-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W1206 17:15:41.350] I1206 17:15:36.576575   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116534-25480", Name:"nginx-6f6bb85d9c", UID:"8c0bdfb3-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1008", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-t7hzg
W1206 17:15:41.350] I1206 17:15:36.578744   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116534-25480", Name:"nginx-6f6bb85d9c", UID:"8c0bdfb3-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1008", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-zvbkc
W1206 17:15:41.350] I1206 17:15:36.578883   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116534-25480", Name:"nginx-6f6bb85d9c", UID:"8c0bdfb3-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1008", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-2zsll
W1206 17:15:41.350] kubectl convert is DEPRECATED and will be removed in a future version.
W1206 17:15:41.350] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1206 17:15:41.351] I1206 17:15:38.335754   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116534-25480", Name:"busybox0", UID:"8d184373-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1038", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-6tkpm
W1206 17:15:41.351] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1206 17:15:41.351] I1206 17:15:38.339363   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116534-25480", Name:"busybox1", UID:"8d18fc2e-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1040", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jpzwp
W1206 17:15:41.351] I1206 17:15:38.533167   55659 namespace_controller.go:171] Namespace has been deleted non-native-resources
W1206 17:15:41.351] I1206 17:15:39.891129   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116534-25480", Name:"busybox0", UID:"8d184373-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1059", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-hdjgx
W1206 17:15:41.352] I1206 17:15:39.897835   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116534-25480", Name:"busybox1", UID:"8d18fc2e-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1063", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-mqdr6
W1206 17:15:41.352] I1206 17:15:40.528987   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116534-25480", Name:"nginx1-deployment", UID:"8e66dcf4-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1080", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W1206 17:15:41.352] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1206 17:15:41.352] I1206 17:15:40.532334   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116534-25480", Name:"nginx1-deployment-75f6fc6747", UID:"8e676e9f-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1081", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-9qxrq
W1206 17:15:41.353] I1206 17:15:40.532335   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116534-25480", Name:"nginx0-deployment", UID:"8e678273-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1082", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W1206 17:15:41.353] I1206 17:15:40.535251   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116534-25480", Name:"nginx1-deployment-75f6fc6747", UID:"8e676e9f-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1081", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-wl59h
W1206 17:15:41.353] I1206 17:15:40.537918   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116534-25480", Name:"nginx0-deployment-b6bb4ccbb", UID:"8e67f168-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-g5vvn
W1206 17:15:41.353] I1206 17:15:40.540932   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116534-25480", Name:"nginx0-deployment-b6bb4ccbb", UID:"8e67f168-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-v7ks6
W1206 17:15:41.415] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1206 17:15:41.430] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 17:15:41.530] Successful
I1206 17:15:41.530] message:deployment.extensions/nginx1-deployment 
I1206 17:15:41.530] REVISION  CHANGE-CAUSE
I1206 17:15:41.530] 1         <none>
I1206 17:15:41.531] 
I1206 17:15:41.531] deployment.extensions/nginx0-deployment 
I1206 17:15:41.531] REVISION  CHANGE-CAUSE
I1206 17:15:41.531] 1         <none>
I1206 17:15:41.531] 
I1206 17:15:41.531] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 17:15:41.531] has:nginx0-deployment
I1206 17:15:41.531] Successful
I1206 17:15:41.531] message:deployment.extensions/nginx1-deployment 
I1206 17:15:41.531] REVISION  CHANGE-CAUSE
I1206 17:15:41.531] 1         <none>
I1206 17:15:41.532] 
I1206 17:15:41.532] deployment.extensions/nginx0-deployment 
I1206 17:15:41.532] REVISION  CHANGE-CAUSE
I1206 17:15:41.532] 1         <none>
I1206 17:15:41.532] 
I1206 17:15:41.532] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 17:15:41.532] has:nginx1-deployment
I1206 17:15:41.532] Successful
I1206 17:15:41.532] message:deployment.extensions/nginx1-deployment 
I1206 17:15:41.532] REVISION  CHANGE-CAUSE
I1206 17:15:41.532] 1         <none>
I1206 17:15:41.532] 
I1206 17:15:41.533] deployment.extensions/nginx0-deployment 
I1206 17:15:41.533] REVISION  CHANGE-CAUSE
I1206 17:15:41.533] 1         <none>
I1206 17:15:41.533] 
I1206 17:15:41.533] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1206 17:15:41.533] has:Object 'Kind' is missing
I1206 17:15:41.533] deployment.extensions "nginx1-deployment" force deleted
I1206 17:15:41.533] deployment.extensions "nginx0-deployment" force deleted
I1206 17:15:42.517] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:15:42.653] replicationcontroller/busybox0 created
I1206 17:15:42.656] replicationcontroller/busybox1 created
... skipping 7 lines ...
I1206 17:15:42.832] message:no rollbacker has been implemented for "ReplicationController"
I1206 17:15:42.832] no rollbacker has been implemented for "ReplicationController"
I1206 17:15:42.832] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:42.832] has:Object 'Kind' is missing
I1206 17:15:42.915] Successful
I1206 17:15:42.915] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:42.916] error: replicationcontrollers "busybox0" pausing is not supported
I1206 17:15:42.916] error: replicationcontrollers "busybox1" pausing is not supported
I1206 17:15:42.916] has:Object 'Kind' is missing
I1206 17:15:42.917] Successful
I1206 17:15:42.917] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:42.917] error: replicationcontrollers "busybox0" pausing is not supported
I1206 17:15:42.918] error: replicationcontrollers "busybox1" pausing is not supported
I1206 17:15:42.918] has:replicationcontrollers "busybox0" pausing is not supported
I1206 17:15:42.919] Successful
I1206 17:15:42.919] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:42.919] error: replicationcontrollers "busybox0" pausing is not supported
I1206 17:15:42.919] error: replicationcontrollers "busybox1" pausing is not supported
I1206 17:15:42.920] has:replicationcontrollers "busybox1" pausing is not supported
I1206 17:15:42.998] Successful
I1206 17:15:42.999] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:42.999] error: replicationcontrollers "busybox0" resuming is not supported
I1206 17:15:42.999] error: replicationcontrollers "busybox1" resuming is not supported
I1206 17:15:42.999] has:Object 'Kind' is missing
I1206 17:15:43.000] Successful
I1206 17:15:43.000] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:43.000] error: replicationcontrollers "busybox0" resuming is not supported
I1206 17:15:43.001] error: replicationcontrollers "busybox1" resuming is not supported
I1206 17:15:43.001] has:replicationcontrollers "busybox0" resuming is not supported
I1206 17:15:43.002] Successful
I1206 17:15:43.003] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:43.003] error: replicationcontrollers "busybox0" resuming is not supported
I1206 17:15:43.003] error: replicationcontrollers "busybox1" resuming is not supported
I1206 17:15:43.003] has:replicationcontrollers "busybox0" resuming is not supported
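ReplicationControllers support none of the rollout verbs exercised here, which is what the three error variants assert; against a single RC the failures look like this (a reconstruction):

    kubectl rollout undo rc/busybox0     # no rollbacker has been implemented for "ReplicationController"
    kubectl rollout pause rc/busybox0    # replicationcontrollers "busybox0" pausing is not supported
    kubectl rollout resume rc/busybox0   # replicationcontrollers "busybox0" resuming is not supported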
I1206 17:15:43.071] replicationcontroller "busybox0" force deleted
I1206 17:15:43.076] replicationcontroller "busybox1" force deleted
W1206 17:15:43.176] I1206 17:15:42.656325   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116534-25480", Name:"busybox0", UID:"8fab88f5-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1125", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-vrv22
W1206 17:15:43.177] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1206 17:15:43.177] I1206 17:15:42.659227   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116534-25480", Name:"busybox1", UID:"8fac29e7-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1127", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-bf46c
W1206 17:15:43.177] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1206 17:15:43.177] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1206 17:15:44.094] +++ exit code: 0
I1206 17:15:44.126] Recording: run_namespace_tests
I1206 17:15:44.126] Running command: run_namespace_tests
I1206 17:15:44.144] 
I1206 17:15:44.146] +++ Running case: test-cmd.run_namespace_tests 
I1206 17:15:44.148] +++ working dir: /go/src/k8s.io/kubernetes
I1206 17:15:44.150] +++ command: run_namespace_tests
I1206 17:15:44.158] +++ [1206 17:15:44] Testing kubectl(v1:namespaces)
I1206 17:15:44.223] namespace/my-namespace created
I1206 17:15:44.304] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1206 17:15:44.373] namespace "my-namespace" deleted
I1206 17:15:49.479] namespace/my-namespace condition met
I1206 17:15:49.556] Successful
I1206 17:15:49.556] message:Error from server (NotFound): namespaces "my-namespace" not found
I1206 17:15:49.557] has: not found
I1206 17:15:49.656] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I1206 17:15:49.721] namespace/other created
I1206 17:15:49.805] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I1206 17:15:49.887] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:15:50.021] pod/valid-pod created
I1206 17:15:50.125] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 17:15:50.205] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 17:15:50.278] Successful
I1206 17:15:50.278] message:error: a resource cannot be retrieved by name across all namespaces
I1206 17:15:50.278] has:a resource cannot be retrieved by name across all namespaces
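The rejected request combines a resource name with --all-namespaces, which kubectl refuses because a name is only unique within a single namespace; for example:

    # Fails with "a resource cannot be retrieved by name across all
    # namespaces", as asserted above.
    kubectl get pods valid-pod --all-namespaces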
I1206 17:15:50.358] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1206 17:15:50.426] pod "valid-pod" force deleted
I1206 17:15:50.512] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:15:50.583] namespace "other" deleted
W1206 17:15:50.684] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1206 17:15:52.520] E1206 17:15:52.519530   55659 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1206 17:15:52.823] I1206 17:15:52.823071   55659 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1206 17:15:52.924] I1206 17:15:52.923365   55659 controller_utils.go:1034] Caches are synced for garbage collector controller
W1206 17:15:53.750] I1206 17:15:53.749858   55659 horizontal.go:309] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1544116534-25480
W1206 17:15:53.754] I1206 17:15:53.753986   55659 horizontal.go:309] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1544116534-25480
W1206 17:15:54.476] I1206 17:15:54.475939   55659 namespace_controller.go:171] Namespace has been deleted my-namespace
I1206 17:15:55.697] +++ exit code: 0
... skipping 113 lines ...
I1206 17:16:10.507] +++ command: run_client_config_tests
I1206 17:16:10.519] +++ [1206 17:16:10] Creating namespace namespace-1544116570-11791
I1206 17:16:10.584] namespace/namespace-1544116570-11791 created
I1206 17:16:10.647] Context "test" modified.
I1206 17:16:10.652] +++ [1206 17:16:10] Testing client config
I1206 17:16:10.716] Successful
I1206 17:16:10.716] message:error: stat missing: no such file or directory
I1206 17:16:10.716] has:missing: no such file or directory
I1206 17:16:10.779] Successful
I1206 17:16:10.780] message:error: stat missing: no such file or directory
I1206 17:16:10.780] has:missing: no such file or directory
I1206 17:16:10.837] Successful
I1206 17:16:10.838] message:error: stat missing: no such file or directory
I1206 17:16:10.838] has:missing: no such file or directory
I1206 17:16:10.897] Successful
I1206 17:16:10.897] message:Error in configuration: context was not found for specified context: missing-context
I1206 17:16:10.897] has:context was not found for specified context: missing-context
I1206 17:16:10.957] Successful
I1206 17:16:10.957] message:error: no server found for cluster "missing-cluster"
I1206 17:16:10.958] has:no server found for cluster "missing-cluster"
I1206 17:16:11.021] Successful
I1206 17:16:11.021] message:error: auth info "missing-user" does not exist
I1206 17:16:11.022] has:auth info "missing-user" does not exist
I1206 17:16:11.142] Successful
I1206 17:16:11.142] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1206 17:16:11.142] has:Error loading config file
I1206 17:16:11.203] Successful
I1206 17:16:11.204] message:error: stat missing-config: no such file or directory
I1206 17:16:11.204] has:no such file or directory
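Each failure in this client-config block corresponds to pointing a kubeconfig-related flag at something that does not exist; the commands under test plausibly take this form (a reconstruction, one per error family above):

    kubectl get pods --kubeconfig=missing          # stat missing: no such file or directory
    kubectl get pods --context=missing-context     # context was not found for specified context
    kubectl get pods --cluster=missing-cluster     # no server found for cluster
    kubectl get pods --user=missing-user           # auth info "missing-user" does not exist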
I1206 17:16:11.216] +++ exit code: 0
I1206 17:16:11.245] Recording: run_service_accounts_tests
I1206 17:16:11.245] Running command: run_service_accounts_tests
I1206 17:16:11.262] 
I1206 17:16:11.264] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 76 lines ...
I1206 17:16:18.292]                 job-name=test-job
I1206 17:16:18.292]                 run=pi
I1206 17:16:18.292] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1206 17:16:18.292] Parallelism:    1
I1206 17:16:18.292] Completions:    1
I1206 17:16:18.292] Start Time:     Thu, 06 Dec 2018 17:16:18 +0000
I1206 17:16:18.292] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1206 17:16:18.292] Pod Template:
I1206 17:16:18.292]   Labels:  controller-uid=a4c59fbf-f97a-11e8-be0e-0242ac110002
I1206 17:16:18.293]            job-name=test-job
I1206 17:16:18.293]            run=pi
I1206 17:16:18.293]   Containers:
I1206 17:16:18.293]    pi:
... skipping 329 lines ...
I1206 17:16:27.476]   selector:
I1206 17:16:27.476]     role: padawan
I1206 17:16:27.476]   sessionAffinity: None
I1206 17:16:27.476]   type: ClusterIP
I1206 17:16:27.476] status:
I1206 17:16:27.476]   loadBalancer: {}
W1206 17:16:27.576] error: you must specify resources by --filename when --local is set.
W1206 17:16:27.577] Example resource specifications include:
W1206 17:16:27.577]    '-f rsrc.yaml'
W1206 17:16:27.577]    '--filename=rsrc.json'
I1206 17:16:27.677] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1206 17:16:27.779] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1206 17:16:27.855] service "redis-master" deleted
... skipping 93 lines ...
I1206 17:16:33.278] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 17:16:33.362] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1206 17:16:33.463] daemonset.extensions/bind rolled back
I1206 17:16:33.553] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1206 17:16:33.636] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1206 17:16:33.732] Successful
I1206 17:16:33.732] message:error: unable to find specified revision 1000000 in history
I1206 17:16:33.732] has:unable to find specified revision
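Rolling the daemonset back to a revision that was never recorded fails cleanly, matching the assertion above:

    # The history of daemonset "bind" holds only low-numbered revisions, so
    # an absurd target fails with "unable to find specified revision
    # 1000000 in history".
    kubectl rollout undo daemonset/bind --to-revision=1000000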
I1206 17:16:33.824] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1206 17:16:33.913] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1206 17:16:34.022] daemonset.extensions/bind rolled back
I1206 17:16:34.110] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1206 17:16:34.200] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I1206 17:16:35.431] Namespace:    namespace-1544116594-13797
I1206 17:16:35.431] Selector:     app=guestbook,tier=frontend
I1206 17:16:35.431] Labels:       app=guestbook
I1206 17:16:35.431]               tier=frontend
I1206 17:16:35.431] Annotations:  <none>
I1206 17:16:35.431] Replicas:     3 current / 3 desired
I1206 17:16:35.431] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:16:35.431] Pod Template:
I1206 17:16:35.432]   Labels:  app=guestbook
I1206 17:16:35.432]            tier=frontend
I1206 17:16:35.432]   Containers:
I1206 17:16:35.432]    php-redis:
I1206 17:16:35.432]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1206 17:16:35.532] Namespace:    namespace-1544116594-13797
I1206 17:16:35.533] Selector:     app=guestbook,tier=frontend
I1206 17:16:35.533] Labels:       app=guestbook
I1206 17:16:35.533]               tier=frontend
I1206 17:16:35.533] Annotations:  <none>
I1206 17:16:35.533] Replicas:     3 current / 3 desired
I1206 17:16:35.533] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:16:35.533] Pod Template:
I1206 17:16:35.533]   Labels:  app=guestbook
I1206 17:16:35.533]            tier=frontend
I1206 17:16:35.533]   Containers:
I1206 17:16:35.533]    php-redis:
I1206 17:16:35.533]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1206 17:16:35.633] Namespace:    namespace-1544116594-13797
I1206 17:16:35.633] Selector:     app=guestbook,tier=frontend
I1206 17:16:35.634] Labels:       app=guestbook
I1206 17:16:35.634]               tier=frontend
I1206 17:16:35.634] Annotations:  <none>
I1206 17:16:35.634] Replicas:     3 current / 3 desired
I1206 17:16:35.634] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:16:35.634] Pod Template:
I1206 17:16:35.634]   Labels:  app=guestbook
I1206 17:16:35.634]            tier=frontend
I1206 17:16:35.634]   Containers:
I1206 17:16:35.634]    php-redis:
I1206 17:16:35.634]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1206 17:16:35.737] Namespace:    namespace-1544116594-13797
I1206 17:16:35.737] Selector:     app=guestbook,tier=frontend
I1206 17:16:35.737] Labels:       app=guestbook
I1206 17:16:35.737]               tier=frontend
I1206 17:16:35.737] Annotations:  <none>
I1206 17:16:35.737] Replicas:     3 current / 3 desired
I1206 17:16:35.737] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:16:35.737] Pod Template:
I1206 17:16:35.737]   Labels:  app=guestbook
I1206 17:16:35.737]            tier=frontend
I1206 17:16:35.738]   Containers:
I1206 17:16:35.738]    php-redis:
I1206 17:16:35.738]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 24 lines ...
I1206 17:16:35.942] Namespace:    namespace-1544116594-13797
I1206 17:16:35.942] Selector:     app=guestbook,tier=frontend
I1206 17:16:35.942] Labels:       app=guestbook
I1206 17:16:35.942]               tier=frontend
I1206 17:16:35.942] Annotations:  <none>
I1206 17:16:35.942] Replicas:     3 current / 3 desired
I1206 17:16:35.942] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:16:35.942] Pod Template:
I1206 17:16:35.942]   Labels:  app=guestbook
I1206 17:16:35.942]            tier=frontend
I1206 17:16:35.942]   Containers:
I1206 17:16:35.942]    php-redis:
I1206 17:16:35.943]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1206 17:16:35.970] Namespace:    namespace-1544116594-13797
I1206 17:16:35.970] Selector:     app=guestbook,tier=frontend
I1206 17:16:35.970] Labels:       app=guestbook
I1206 17:16:35.970]               tier=frontend
I1206 17:16:35.970] Annotations:  <none>
I1206 17:16:35.970] Replicas:     3 current / 3 desired
I1206 17:16:35.970] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:16:35.970] Pod Template:
I1206 17:16:35.971]   Labels:  app=guestbook
I1206 17:16:35.971]            tier=frontend
I1206 17:16:35.971]   Containers:
I1206 17:16:35.971]    php-redis:
I1206 17:16:35.971]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1206 17:16:36.063] Namespace:    namespace-1544116594-13797
I1206 17:16:36.063] Selector:     app=guestbook,tier=frontend
I1206 17:16:36.063] Labels:       app=guestbook
I1206 17:16:36.063]               tier=frontend
I1206 17:16:36.063] Annotations:  <none>
I1206 17:16:36.063] Replicas:     3 current / 3 desired
I1206 17:16:36.063] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:16:36.063] Pod Template:
I1206 17:16:36.063]   Labels:  app=guestbook
I1206 17:16:36.063]            tier=frontend
I1206 17:16:36.063]   Containers:
I1206 17:16:36.063]    php-redis:
I1206 17:16:36.064]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1206 17:16:36.160] Namespace:    namespace-1544116594-13797
I1206 17:16:36.160] Selector:     app=guestbook,tier=frontend
I1206 17:16:36.160] Labels:       app=guestbook
I1206 17:16:36.160]               tier=frontend
I1206 17:16:36.160] Annotations:  <none>
I1206 17:16:36.160] Replicas:     3 current / 3 desired
I1206 17:16:36.160] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:16:36.160] Pod Template:
I1206 17:16:36.160]   Labels:  app=guestbook
I1206 17:16:36.160]            tier=frontend
I1206 17:16:36.160]   Containers:
I1206 17:16:36.160]    php-redis:
I1206 17:16:36.160]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I1206 17:16:36.902] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I1206 17:16:36.980] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I1206 17:16:37.062] replicationcontroller/frontend scaled
I1206 17:16:37.147] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I1206 17:16:37.223] replicationcontroller "frontend" deleted
W1206 17:16:37.324] I1206 17:16:36.326014   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"frontend", UID:"aefe9990-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-flhpk
W1206 17:16:37.324] error: Expected replicas to be 3, was 2
W1206 17:16:37.324] I1206 17:16:36.822755   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"frontend", UID:"aefe9990-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1384", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2gnwf
W1206 17:16:37.324] I1206 17:16:37.067287   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"frontend", UID:"aefe9990-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1389", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-2gnwf
W1206 17:16:37.366] I1206 17:16:37.365576   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"redis-master", UID:"b0477a57-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1400", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-z457z
I1206 17:16:37.466] replicationcontroller/redis-master created
I1206 17:16:37.506] replicationcontroller/redis-slave created
I1206 17:16:37.599] replicationcontroller/redis-master scaled
... skipping 6 lines ...
W1206 17:16:37.960] I1206 17:16:37.511313   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"redis-slave", UID:"b05d5fb5-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-sfrzl
W1206 17:16:37.960] I1206 17:16:37.602575   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"redis-master", UID:"b0477a57-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-c2gfm
W1206 17:16:37.960] I1206 17:16:37.604953   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"redis-master", UID:"b0477a57-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-8f5t9
W1206 17:16:37.961] I1206 17:16:37.605548   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"redis-master", UID:"b0477a57-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-jwc88
W1206 17:16:37.961] I1206 17:16:37.609979   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"redis-slave", UID:"b05d5fb5-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-hc4gc
W1206 17:16:37.961] I1206 17:16:37.612314   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"redis-slave", UID:"b05d5fb5-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-pwbn2
W1206 17:16:37.961] E1206 17:16:37.864349   55659 replica_set.go:450] Sync "namespace-1544116594-13797/redis-master" failed with Operation cannot be fulfilled on replicationcontrollers "redis-master": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1544116594-13797/redis-master, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: b0477a57-f97a-11e8-be0e-0242ac110002, UID in object meta: 
W1206 17:16:37.962] E1206 17:16:37.915945   55659 replica_set.go:450] Sync "namespace-1544116594-13797/redis-slave" failed with Operation cannot be fulfilled on replicationcontrollers "redis-slave": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1544116594-13797/redis-slave, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: b05d5fb5-f97a-11e8-be0e-0242ac110002, UID in object meta: 
W1206 17:16:38.018] I1206 17:16:38.017895   55659 event.go:221] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1544116594-13797", Name:"pi", UID:"b0aaf1b0-f97a-11e8-be0e-0242ac110002", APIVersion:"batch/v1", ResourceVersion:"1446", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-6r7p8
W1206 17:16:38.090] kubectl scale job is DEPRECATED and will be removed in a future version.
I1206 17:16:38.191] job.batch/pi created
I1206 17:16:38.191] job.batch/pi scaled
I1206 17:16:38.191] core.sh:1089: Successful get job pi {{.spec.parallelism}}: 2
I1206 17:16:38.255] job.batch "pi" deleted
... skipping 11 lines ...
I1206 17:16:38.988] service "expose-test-deployment" deleted
I1206 17:16:39.078] Successful
I1206 17:16:39.078] message:service/expose-test-deployment exposed
I1206 17:16:39.078] has:service/expose-test-deployment exposed
I1206 17:16:39.150] service "expose-test-deployment" deleted
I1206 17:16:39.232] Successful
I1206 17:16:39.233] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1206 17:16:39.233] See 'kubectl expose -h' for help and examples
I1206 17:16:39.233] has:invalid deployment: no selectors
I1206 17:16:39.312] Successful
I1206 17:16:39.312] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1206 17:16:39.312] See 'kubectl expose -h' for help and examples
I1206 17:16:39.312] has:invalid deployment: no selectors
W1206 17:16:39.413] I1206 17:16:38.400285   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment", UID:"b0e54afe-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1454", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-659fc6fb to 3
W1206 17:16:39.414] I1206 17:16:38.403822   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-659fc6fb", UID:"b0e5e3ed-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-xhvvx
W1206 17:16:39.414] I1206 17:16:38.406213   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-659fc6fb", UID:"b0e5e3ed-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-p47hb
W1206 17:16:39.414] I1206 17:16:38.406292   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-659fc6fb", UID:"b0e5e3ed-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-5l4rc
... skipping 27 lines ...
I1206 17:16:41.209] service "frontend" deleted
I1206 17:16:41.215] service "frontend-2" deleted
I1206 17:16:41.220] service "frontend-3" deleted
I1206 17:16:41.225] service "frontend-4" deleted
I1206 17:16:41.231] service "frontend-5" deleted
I1206 17:16:41.319] Successful
I1206 17:16:41.319] message:error: cannot expose a Node
I1206 17:16:41.320] has:cannot expose
I1206 17:16:41.402] Successful
I1206 17:16:41.403] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1206 17:16:41.403] has:metadata.name: Invalid value
I1206 17:16:41.487] Successful
I1206 17:16:41.488] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 33 lines ...
I1206 17:16:43.510] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1206 17:16:43.596] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1206 17:16:43.671] horizontalpodautoscaler.autoscaling "frontend" deleted
W1206 17:16:43.771] I1206 17:16:43.093641   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"frontend", UID:"b3b165f0-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8n6fn
W1206 17:16:43.772] I1206 17:16:43.096185   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"frontend", UID:"b3b165f0-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2b4zs
W1206 17:16:43.772] I1206 17:16:43.097120   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116594-13797", Name:"frontend", UID:"b3b165f0-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-chqmn
W1206 17:16:43.772] Error: required flag(s) "max" not set
W1206 17:16:43.772] 
W1206 17:16:43.772] 
W1206 17:16:43.772] Examples:
W1206 17:16:43.773]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1206 17:16:43.773]   kubectl autoscale deployment foo --min=2 --max=10
W1206 17:16:43.773]   
... skipping 54 lines ...
I1206 17:16:43.965]           limits:
I1206 17:16:43.965]             cpu: 300m
I1206 17:16:43.965]           requests:
I1206 17:16:43.965]             cpu: 300m
I1206 17:16:43.965]       terminationGracePeriodSeconds: 0
I1206 17:16:43.965] status: {}
W1206 17:16:44.066] Error from server (NotFound): deployments.extensions "nginx-deployment-resources" not found
I1206 17:16:44.186] deployment.extensions/nginx-deployment-resources created
I1206 17:16:44.282] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1206 17:16:44.367] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 17:16:44.453] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1206 17:16:44.538] deployment.extensions/nginx-deployment-resources resource requirements updated
I1206 17:16:44.627] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 81 lines ...
W1206 17:16:45.570] I1206 17:16:44.196631   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-69c96fd869", UID:"b4595746-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1646", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-5ntcz
W1206 17:16:45.570] I1206 17:16:44.541545   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources", UID:"b458b7d1-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W1206 17:16:45.570] I1206 17:16:44.544685   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-6c5996c457", UID:"b48eefb2-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1660", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-7tnfb
W1206 17:16:45.571] I1206 17:16:44.547564   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources", UID:"b458b7d1-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W1206 17:16:45.571] I1206 17:16:44.552650   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources", UID:"b458b7d1-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1661", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 2
W1206 17:16:45.571] I1206 17:16:44.552702   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-69c96fd869", UID:"b4595746-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-66vcw
W1206 17:16:45.572] E1206 17:16:44.553991   55659 replica_set.go:450] Sync "namespace-1544116594-13797/nginx-deployment-resources-6c5996c457" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-resources-6c5996c457": the object has been modified; please apply your changes to the latest version and try again
W1206 17:16:45.572] I1206 17:16:44.556254   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-6c5996c457", UID:"b48eefb2-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-ps8m6
W1206 17:16:45.572] error: unable to find container named redis
W1206 17:16:45.572] I1206 17:16:44.881722   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources", UID:"b458b7d1-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1683", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 0
W1206 17:16:45.572] I1206 17:16:44.886549   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-69c96fd869", UID:"b4595746-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-bg62l
W1206 17:16:45.573] I1206 17:16:44.887303   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources", UID:"b458b7d1-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1686", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 2
W1206 17:16:45.573] I1206 17:16:44.887321   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-69c96fd869", UID:"b4595746-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-5ntcz
W1206 17:16:45.573] I1206 17:16:44.890325   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-5f4579485f", UID:"b4c20273-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-xrq69
W1206 17:16:45.573] I1206 17:16:44.892928   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-5f4579485f", UID:"b4c20273-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-f55gj
W1206 17:16:45.574] I1206 17:16:45.139730   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources", UID:"b458b7d1-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1709", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-5f4579485f to 0
W1206 17:16:45.574] I1206 17:16:45.145493   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-5f4579485f", UID:"b4c20273-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1713", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5f4579485f-xrq69
W1206 17:16:45.574] I1206 17:16:45.145542   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-5f4579485f", UID:"b4c20273-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1713", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5f4579485f-f55gj
W1206 17:16:45.575] I1206 17:16:45.145650   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources", UID:"b458b7d1-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1711", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 2
W1206 17:16:45.575] I1206 17:16:45.147938   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-ff8d89cb6", UID:"b4e95b7d-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1718", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-qz7mk
W1206 17:16:45.575] I1206 17:16:45.244272   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116594-13797", Name:"nginx-deployment-resources-ff8d89cb6", UID:"b4e95b7d-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1718", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-4xd7x
W1206 17:16:45.575] error: you must specify resources by --filename when --local is set.
W1206 17:16:45.575] Example resource specifications include:
W1206 17:16:45.575]    '-f rsrc.yaml'
W1206 17:16:45.575]    '--filename=rsrc.json'
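The error above comes from running "kubectl set" with --local but no --filename: --local renders the change client-side, so the object must come from a file rather than the server. A minimal sketch, with a hypothetical manifest name:

  # rsrc.yaml is illustrative; --local prints the modified object
  # without contacting the API server.
  kubectl set resources -f rsrc.yaml --limits=cpu=200m,memory=512Mi --local -o yaml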
I1206 17:16:45.676] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1206 17:16:45.697] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1206 17:16:45.780] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I1206 17:16:47.231]                 pod-template-hash=55c9b846cc
I1206 17:16:47.231] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1206 17:16:47.231]                 deployment.kubernetes.io/max-replicas: 2
I1206 17:16:47.231]                 deployment.kubernetes.io/revision: 1
I1206 17:16:47.231] Controlled By:  Deployment/test-nginx-apps
I1206 17:16:47.231] Replicas:       1 current / 1 desired
I1206 17:16:47.231] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 17:16:47.231] Pod Template:
I1206 17:16:47.232]   Labels:  app=test-nginx-apps
I1206 17:16:47.232]            pod-template-hash=55c9b846cc
I1206 17:16:47.232]   Containers:
I1206 17:16:47.232]    nginx:
I1206 17:16:47.232]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 95 lines ...
W1206 17:16:51.461] I1206 17:16:50.978492   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-6f6bb85d9c", UID:"b810e95d-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1895", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-6f6bb85d9c-vc2cd
W1206 17:16:51.461] I1206 17:16:50.978743   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116605-12521", Name:"nginx", UID:"b8104a53-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1891", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 2
W1206 17:16:51.462] I1206 17:16:50.981027   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-9486b7cb7", UID:"b8635d3a-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1900", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-2n7jm
I1206 17:16:52.459] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 17:16:52.652] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 17:16:52.762] deployment.extensions/nginx rolled back
W1206 17:16:52.862] error: unable to find specified revision 1000000 in history
I1206 17:16:53.863] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1206 17:16:53.958] deployment.extensions/nginx paused
W1206 17:16:54.066] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I1206 17:16:54.167] deployment.extensions/nginx resumed
I1206 17:16:54.271] deployment.extensions/nginx rolled back
I1206 17:16:54.452]     deployment.kubernetes.io/revision-history: 1,3
W1206 17:16:54.638] error: desired revision (3) is different from the running revision (5)
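The three rollout errors above map onto ordinary "kubectl rollout" usage: a nonexistent revision is rejected, a paused deployment cannot be rolled back until resumed, and undo may target a revision other than the running one. A minimal sketch, assuming a deployment named nginx:

  kubectl rollout history deployment/nginx    # list recorded revisions
  kubectl rollout resume deployment/nginx     # required before undo if paused
  kubectl rollout undo deployment/nginx --to-revision=3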
I1206 17:16:54.792] deployment.extensions/nginx2 created
I1206 17:16:54.879] deployment.extensions "nginx2" deleted
I1206 17:16:54.962] deployment.extensions "nginx" deleted
I1206 17:16:55.056] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:16:55.205] deployment.extensions/nginx-deployment created
I1206 17:16:55.307] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 29 lines ...
I1206 17:16:56.953] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 17:16:57.117] apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 17:16:57.200] apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1206 17:16:57.272] deployment.extensions "nginx-deployment" deleted
I1206 17:16:57.361] apps.sh:366: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:16:57.496] deployment.extensions/nginx-deployment created
W1206 17:16:57.597] error: unable to find container named "redis"
W1206 17:16:57.597] I1206 17:16:56.782571   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment", UID:"bae9ef7a-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2011", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1206 17:16:57.597] I1206 17:16:56.787947   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment", UID:"bae9ef7a-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2013", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 2
W1206 17:16:57.598] I1206 17:16:56.788027   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-646d4f779d", UID:"baea9f9c-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-ppjm9
W1206 17:16:57.598] I1206 17:16:56.788089   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-646d4f779d", UID:"baea9f9c-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-p6qdw
W1206 17:16:57.598] I1206 17:16:56.791272   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-dc756cc6", UID:"bbd9fda8-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-wtnhj
W1206 17:16:57.599] I1206 17:16:56.793626   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-dc756cc6", UID:"bbd9fda8-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-tblzd
... skipping 54 lines ...
I1206 17:17:00.981] Namespace:    namespace-1544116619-1518
I1206 17:17:00.982] Selector:     app=guestbook,tier=frontend
I1206 17:17:00.982] Labels:       app=guestbook
I1206 17:17:00.982]               tier=frontend
I1206 17:17:00.982] Annotations:  <none>
I1206 17:17:00.982] Replicas:     3 current / 3 desired
I1206 17:17:00.982] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:00.982] Pod Template:
I1206 17:17:00.982]   Labels:  app=guestbook
I1206 17:17:00.982]            tier=frontend
I1206 17:17:00.982]   Containers:
I1206 17:17:00.982]    php-redis:
I1206 17:17:00.982]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1206 17:17:01.086] Namespace:    namespace-1544116619-1518
I1206 17:17:01.086] Selector:     app=guestbook,tier=frontend
I1206 17:17:01.086] Labels:       app=guestbook
I1206 17:17:01.086]               tier=frontend
I1206 17:17:01.086] Annotations:  <none>
I1206 17:17:01.086] Replicas:     3 current / 3 desired
I1206 17:17:01.086] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:01.086] Pod Template:
I1206 17:17:01.086]   Labels:  app=guestbook
I1206 17:17:01.087]            tier=frontend
I1206 17:17:01.087]   Containers:
I1206 17:17:01.087]    php-redis:
I1206 17:17:01.087]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1206 17:17:01.181] Namespace:    namespace-1544116619-1518
I1206 17:17:01.181] Selector:     app=guestbook,tier=frontend
I1206 17:17:01.181] Labels:       app=guestbook
I1206 17:17:01.181]               tier=frontend
I1206 17:17:01.181] Annotations:  <none>
I1206 17:17:01.181] Replicas:     3 current / 3 desired
I1206 17:17:01.181] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:01.181] Pod Template:
I1206 17:17:01.182]   Labels:  app=guestbook
I1206 17:17:01.182]            tier=frontend
I1206 17:17:01.182]   Containers:
I1206 17:17:01.182]    php-redis:
I1206 17:17:01.182]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I1206 17:17:01.282] Namespace:    namespace-1544116619-1518
I1206 17:17:01.282] Selector:     app=guestbook,tier=frontend
I1206 17:17:01.282] Labels:       app=guestbook
I1206 17:17:01.282]               tier=frontend
I1206 17:17:01.282] Annotations:  <none>
I1206 17:17:01.282] Replicas:     3 current / 3 desired
I1206 17:17:01.282] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:01.282] Pod Template:
I1206 17:17:01.282]   Labels:  app=guestbook
I1206 17:17:01.283]            tier=frontend
I1206 17:17:01.283]   Containers:
I1206 17:17:01.283]    php-redis:
I1206 17:17:01.283]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 13 lines ...
I1206 17:17:01.284]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-qm472
I1206 17:17:01.284]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-plv89
I1206 17:17:01.284] 
W1206 17:17:01.384] I1206 17:16:58.138347   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment", UID:"bc478ef8-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2066", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 1
W1206 17:17:01.385] I1206 17:16:58.141455   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-5b795689cd", UID:"bca9b3bc-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2067", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-p48cc
W1206 17:17:01.385] I1206 17:16:58.144707   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment", UID:"bc478ef8-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2066", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1206 17:17:01.386] E1206 17:16:58.149209   55659 replica_set.go:450] Sync "namespace-1544116605-12521/nginx-deployment-5b795689cd" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5b795689cd": the object has been modified; please apply your changes to the latest version and try again
W1206 17:17:01.386] I1206 17:16:58.149662   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-646d4f779d", UID:"bc4821cd-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-mq6c4
W1206 17:17:01.386] I1206 17:16:58.149837   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment", UID:"bc478ef8-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2069", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 2
W1206 17:17:01.387] I1206 17:16:58.153289   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-5b795689cd", UID:"bca9b3bc-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-57bxh
W1206 17:17:01.387] I1206 17:16:58.258038   55659 horizontal.go:309] Horizontal Pod Autoscaler frontend has been deleted in namespace-1544116594-13797
W1206 17:17:01.387] I1206 17:16:58.411412   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment", UID:"bc478ef8-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2090", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1206 17:17:01.387] I1206 17:16:58.414858   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-646d4f779d", UID:"bc4821cd-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2094", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-dcjfc
... skipping 8 lines ...
W1206 17:17:01.390] I1206 17:16:58.597206   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-794dcdf6bb", UID:"bced2c77-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2124", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-794dcdf6bb-pqb77
W1206 17:17:01.391] I1206 17:16:58.642701   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-794dcdf6bb", UID:"bced2c77-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2124", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-794dcdf6bb-6rqtc
W1206 17:17:01.391] I1206 17:16:58.683309   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment", UID:"bc478ef8-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2135", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5b795689cd to 0
W1206 17:17:01.391] I1206 17:16:58.740049   55659 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment", UID:"bc478ef8-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2137", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-65b869c68c to 2
W1206 17:17:01.391] I1206 17:16:58.894768   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-5b795689cd", UID:"bca9b3bc-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2138", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-p48cc
W1206 17:17:01.392] I1206 17:16:58.943736   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116605-12521", Name:"nginx-deployment-5b795689cd", UID:"bca9b3bc-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2138", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-57bxh
W1206 17:17:01.392] E1206 17:16:59.140980   55659 replica_set.go:450] Sync "namespace-1544116605-12521/nginx-deployment-65b869c68c" failed with replicasets.apps "nginx-deployment-65b869c68c" not found
W1206 17:17:01.392] E1206 17:16:59.291190   55659 replica_set.go:450] Sync "namespace-1544116605-12521/nginx-deployment-5766b7c95b" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5766b7c95b": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1544116605-12521/nginx-deployment-5766b7c95b, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: bcd263a6-f97a-11e8-be0e-0242ac110002, UID in object meta: 
W1206 17:17:01.393] E1206 17:16:59.390848   55659 replica_set.go:450] Sync "namespace-1544116605-12521/nginx-deployment-794dcdf6bb" failed with replicasets.apps "nginx-deployment-794dcdf6bb" not found
W1206 17:17:01.393] E1206 17:16:59.440772   55659 replica_set.go:450] Sync "namespace-1544116605-12521/nginx-deployment-669d4f8fc9" failed with replicasets.apps "nginx-deployment-669d4f8fc9" not found
W1206 17:17:01.393] E1206 17:16:59.490832   55659 replica_set.go:450] Sync "namespace-1544116605-12521/nginx-deployment-5b795689cd" failed with replicasets.apps "nginx-deployment-5b795689cd" not found
W1206 17:17:01.393] I1206 17:16:59.620230   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend", UID:"bd8afe35-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2173", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nf8m4
W1206 17:17:01.394] I1206 17:16:59.692104   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend", UID:"bd8afe35-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2173", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jp5qx
W1206 17:17:01.394] I1206 17:16:59.742296   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend", UID:"bd8afe35-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2173", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-c7xfz
W1206 17:17:01.394] E1206 17:16:59.940735   55659 replica_set.go:450] Sync "namespace-1544116619-1518/frontend" failed with replicasets.apps "frontend" not found
W1206 17:17:01.394] I1206 17:17:00.022437   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend-no-cascade", UID:"bdc88955-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2187", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-7b6wk
W1206 17:17:01.395] I1206 17:17:00.041945   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend-no-cascade", UID:"bdc88955-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2187", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-rvbxq
W1206 17:17:01.395] I1206 17:17:00.092468   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend-no-cascade", UID:"bdc88955-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2187", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-b4rsk
W1206 17:17:01.395] E1206 17:17:00.390934   55659 replica_set.go:450] Sync "namespace-1544116619-1518/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W1206 17:17:01.396] I1206 17:17:00.768797   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend", UID:"be3a7a19-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2207", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8t9ss
W1206 17:17:01.396] I1206 17:17:00.771262   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend", UID:"be3a7a19-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2207", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qm472
W1206 17:17:01.396] I1206 17:17:00.771305   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend", UID:"be3a7a19-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2207", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-plv89
I1206 17:17:01.497] Successful describe rs:
I1206 17:17:01.497] Name:         frontend
I1206 17:17:01.497] Namespace:    namespace-1544116619-1518
I1206 17:17:01.497] Selector:     app=guestbook,tier=frontend
I1206 17:17:01.497] Labels:       app=guestbook
I1206 17:17:01.497]               tier=frontend
I1206 17:17:01.497] Annotations:  <none>
I1206 17:17:01.498] Replicas:     3 current / 3 desired
I1206 17:17:01.498] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:01.498] Pod Template:
I1206 17:17:01.498]   Labels:  app=guestbook
I1206 17:17:01.498]            tier=frontend
I1206 17:17:01.498]   Containers:
I1206 17:17:01.498]    php-redis:
I1206 17:17:01.498]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1206 17:17:01.519] Namespace:    namespace-1544116619-1518
I1206 17:17:01.519] Selector:     app=guestbook,tier=frontend
I1206 17:17:01.519] Labels:       app=guestbook
I1206 17:17:01.519]               tier=frontend
I1206 17:17:01.519] Annotations:  <none>
I1206 17:17:01.519] Replicas:     3 current / 3 desired
I1206 17:17:01.519] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:01.519] Pod Template:
I1206 17:17:01.519]   Labels:  app=guestbook
I1206 17:17:01.519]            tier=frontend
I1206 17:17:01.519]   Containers:
I1206 17:17:01.519]    php-redis:
I1206 17:17:01.519]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1206 17:17:01.616] Namespace:    namespace-1544116619-1518
I1206 17:17:01.616] Selector:     app=guestbook,tier=frontend
I1206 17:17:01.617] Labels:       app=guestbook
I1206 17:17:01.617]               tier=frontend
I1206 17:17:01.617] Annotations:  <none>
I1206 17:17:01.617] Replicas:     3 current / 3 desired
I1206 17:17:01.617] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:01.617] Pod Template:
I1206 17:17:01.617]   Labels:  app=guestbook
I1206 17:17:01.617]            tier=frontend
I1206 17:17:01.617]   Containers:
I1206 17:17:01.617]    php-redis:
I1206 17:17:01.618]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I1206 17:17:01.719] Namespace:    namespace-1544116619-1518
I1206 17:17:01.719] Selector:     app=guestbook,tier=frontend
I1206 17:17:01.719] Labels:       app=guestbook
I1206 17:17:01.719]               tier=frontend
I1206 17:17:01.719] Annotations:  <none>
I1206 17:17:01.719] Replicas:     3 current / 3 desired
I1206 17:17:01.719] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:01.720] Pod Template:
I1206 17:17:01.720]   Labels:  app=guestbook
I1206 17:17:01.720]            tier=frontend
I1206 17:17:01.720]   Containers:
I1206 17:17:01.720]    php-redis:
I1206 17:17:01.720]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I1206 17:17:06.443] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1206 17:17:06.529] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1206 17:17:06.602] horizontalpodautoscaler.autoscaling "frontend" deleted
W1206 17:17:06.703] I1206 17:17:06.037298   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend", UID:"c15e6b69-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4mnbw
W1206 17:17:06.703] I1206 17:17:06.039574   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend", UID:"c15e6b69-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-prh7r
W1206 17:17:06.703] I1206 17:17:06.039689   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544116619-1518", Name:"frontend", UID:"c15e6b69-f97a-11e8-be0e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-29kkd
W1206 17:17:06.703] Error: required flag(s) "max" not set
W1206 17:17:06.703] 
W1206 17:17:06.703] 
W1206 17:17:06.703] Examples:
W1206 17:17:06.704]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1206 17:17:06.704]   kubectl autoscale deployment foo --min=2 --max=10
W1206 17:17:06.704]   
... skipping 85 lines ...
I1206 17:17:09.352] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1206 17:17:09.439] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1206 17:17:09.535] statefulset.apps/nginx rolled back
I1206 17:17:09.620] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1206 17:17:09.704] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1206 17:17:09.800] Successful
I1206 17:17:09.801] message:error: unable to find specified revision 1000000 in history
I1206 17:17:09.801] has:unable to find specified revision
I1206 17:17:09.885] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1206 17:17:09.968] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1206 17:17:10.062] statefulset.apps/nginx rolled back
I1206 17:17:10.156] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I1206 17:17:10.246] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 61 lines ...
I1206 17:17:11.906] Name:         mock
I1206 17:17:11.906] Namespace:    namespace-1544116631-20823
I1206 17:17:11.906] Selector:     app=mock
I1206 17:17:11.906] Labels:       app=mock
I1206 17:17:11.906] Annotations:  <none>
I1206 17:17:11.906] Replicas:     1 current / 1 desired
I1206 17:17:11.906] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:11.906] Pod Template:
I1206 17:17:11.906]   Labels:  app=mock
I1206 17:17:11.906]   Containers:
I1206 17:17:11.906]    mock-container:
I1206 17:17:11.906]     Image:        k8s.gcr.io/pause:2.0
I1206 17:17:11.907]     Port:         9949/TCP
... skipping 56 lines ...
I1206 17:17:13.884] Name:         mock
I1206 17:17:13.884] Namespace:    namespace-1544116631-20823
I1206 17:17:13.884] Selector:     app=mock
I1206 17:17:13.884] Labels:       app=mock
I1206 17:17:13.884] Annotations:  <none>
I1206 17:17:13.884] Replicas:     1 current / 1 desired
I1206 17:17:13.884] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:13.884] Pod Template:
I1206 17:17:13.884]   Labels:  app=mock
I1206 17:17:13.884]   Containers:
I1206 17:17:13.885]    mock-container:
I1206 17:17:13.885]     Image:        k8s.gcr.io/pause:2.0
I1206 17:17:13.885]     Port:         9949/TCP
... skipping 56 lines ...
I1206 17:17:15.858] Name:         mock
I1206 17:17:15.858] Namespace:    namespace-1544116631-20823
I1206 17:17:15.858] Selector:     app=mock
I1206 17:17:15.858] Labels:       app=mock
I1206 17:17:15.858] Annotations:  <none>
I1206 17:17:15.858] Replicas:     1 current / 1 desired
I1206 17:17:15.858] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:15.858] Pod Template:
I1206 17:17:15.859]   Labels:  app=mock
I1206 17:17:15.859]   Containers:
I1206 17:17:15.859]    mock-container:
I1206 17:17:15.859]     Image:        k8s.gcr.io/pause:2.0
I1206 17:17:15.859]     Port:         9949/TCP
... skipping 42 lines ...
I1206 17:17:17.752] Namespace:    namespace-1544116631-20823
I1206 17:17:17.752] Selector:     app=mock
I1206 17:17:17.752] Labels:       app=mock
I1206 17:17:17.752]               status=replaced
I1206 17:17:17.752] Annotations:  <none>
I1206 17:17:17.753] Replicas:     1 current / 1 desired
I1206 17:17:17.753] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:17.753] Pod Template:
I1206 17:17:17.753]   Labels:  app=mock
I1206 17:17:17.753]   Containers:
I1206 17:17:17.753]    mock-container:
I1206 17:17:17.753]     Image:        k8s.gcr.io/pause:2.0
I1206 17:17:17.753]     Port:         9949/TCP
... skipping 11 lines ...
I1206 17:17:17.755] Namespace:    namespace-1544116631-20823
I1206 17:17:17.755] Selector:     app=mock2
I1206 17:17:17.755] Labels:       app=mock2
I1206 17:17:17.755]               status=replaced
I1206 17:17:17.755] Annotations:  <none>
I1206 17:17:17.755] Replicas:     1 current / 1 desired
I1206 17:17:17.755] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1206 17:17:17.755] Pod Template:
I1206 17:17:17.755]   Labels:  app=mock2
I1206 17:17:17.755]   Containers:
I1206 17:17:17.755]    mock-container:
I1206 17:17:17.756]     Image:        k8s.gcr.io/pause:2.0
I1206 17:17:17.756]     Port:         9949/TCP
... skipping 107 lines ...
I1206 17:17:22.281] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I1206 17:17:22.421] persistentvolume/pv0001 created
I1206 17:17:22.510] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I1206 17:17:22.581] persistentvolume "pv0001" deleted
W1206 17:17:22.682] I1206 17:17:21.196806   55659 horizontal.go:309] Horizontal Pod Autoscaler frontend has been deleted in namespace-1544116619-1518
W1206 17:17:22.683] I1206 17:17:21.418644   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116631-20823", Name:"mock", UID:"ca89a6da-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"2664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-7wh5k
W1206 17:17:22.683] E1206 17:17:22.427037   55659 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
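The pv_protection_controller error above is the API server's optimistic-concurrency check: the write carried a stale resourceVersion, so the client must re-read and retry. A purely illustrative shell sketch of that loop:

  # Re-fetch the object and retry the write on a resourceVersion conflict.
  for attempt in 1 2 3; do
    kubectl get pv pv0001 -o yaml > /tmp/pv.yaml   # fresh resourceVersion
    # (apply the desired edit to /tmp/pv.yaml here)
    kubectl replace -f /tmp/pv.yaml && break
  done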
I1206 17:17:22.784] persistentvolume/pv0002 created
I1206 17:17:22.819] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I1206 17:17:22.890] persistentvolume "pv0002" deleted
I1206 17:17:23.042] persistentvolume/pv0003 created
I1206 17:17:23.130] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I1206 17:17:23.203] persistentvolume "pv0003" deleted
... skipping 478 lines ...
I1206 17:17:27.635] yes
I1206 17:17:27.635] has:the server doesn't have a resource type
I1206 17:17:27.704] Successful
I1206 17:17:27.704] message:yes
I1206 17:17:27.704] has:yes
I1206 17:17:27.771] Successful
I1206 17:17:27.771] message:error: --subresource can not be used with NonResourceURL
I1206 17:17:27.771] has:subresource can not be used with NonResourceURL
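As the error above notes, --subresource applies only to resource checks; access to a non-resource URL is checked bare. A minimal sketch of both forms:

  kubectl auth can-i get pods --subresource=log   # resource + subresource
  kubectl auth can-i get /logs                    # non-resource URL, no subresource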
I1206 17:17:27.841] Successful
I1206 17:17:27.916] Successful
I1206 17:17:27.916] message:yes
I1206 17:17:27.916] 0
I1206 17:17:27.916] has:0
... skipping 6 lines ...
I1206 17:17:28.088] role.rbac.authorization.k8s.io/testing-R reconciled
I1206 17:17:28.172] legacy-script.sh:736: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I1206 17:17:28.253] legacy-script.sh:737: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I1206 17:17:28.335] legacy-script.sh:738: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I1206 17:17:28.417] legacy-script.sh:739: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I1206 17:17:28.493] Successful
I1206 17:17:28.494] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I1206 17:17:28.494] has:only rbac.authorization.k8s.io/v1 is supported
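"kubectl auth reconcile" accepts only rbac.authorization.k8s.io/v1 objects, which is what the error above asserts. A minimal sketch, with a hypothetical file name:

  # rbac-v1.yaml is illustrative; its objects must declare
  # apiVersion: rbac.authorization.k8s.io/v1
  kubectl auth reconcile -f rbac-v1.yaml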
I1206 17:17:28.575] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I1206 17:17:28.580] role.rbac.authorization.k8s.io "testing-R" deleted
I1206 17:17:28.589] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I1206 17:17:28.596] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I1206 17:17:28.605] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I1206 17:17:29.615] +++ Running case: test-cmd.run_kubectl_explain_tests 
I1206 17:17:29.617] +++ working dir: /go/src/k8s.io/kubernetes
I1206 17:17:29.619] +++ command: run_kubectl_explain_tests
I1206 17:17:29.627] +++ [1206 17:17:29] Testing kubectl(v1:explain)
W1206 17:17:29.728] I1206 17:17:29.509819   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116648-14750", Name:"cassandra", UID:"cf23b7bd-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"2744", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-rs4nm
W1206 17:17:29.728] I1206 17:17:29.516894   55659 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544116648-14750", Name:"cassandra", UID:"cf23b7bd-f97a-11e8-be0e-0242ac110002", APIVersion:"v1", ResourceVersion:"2754", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-5tjwg
W1206 17:17:29.728] E1206 17:17:29.522013   55659 replica_set.go:450] Sync "namespace-1544116648-14750/cassandra" failed with replicationcontrollers "cassandra" not found
I1206 17:17:29.829] KIND:     Pod
I1206 17:17:29.829] VERSION:  v1
I1206 17:17:29.829] 
I1206 17:17:29.829] DESCRIPTION:
I1206 17:17:29.830]      Pod is a collection of containers that can run on a host. This resource is
I1206 17:17:29.830]      created by clients and scheduled onto hosts.
... skipping 849 lines ...
I1206 17:17:53.889] message:node/127.0.0.1 already uncordoned (dry run)
I1206 17:17:53.889] has:already uncordoned
I1206 17:17:53.970] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I1206 17:17:54.043] node/127.0.0.1 labeled
I1206 17:17:54.126] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I1206 17:17:54.187] Successful
I1206 17:17:54.187] message:error: cannot specify both a node name and a --selector option
I1206 17:17:54.187] See 'kubectl drain -h' for help and examples
I1206 17:17:54.187] has:cannot specify both a node name
I1206 17:17:54.249] Successful
I1206 17:17:54.249] message:error: USAGE: cordon NODE [flags]
I1206 17:17:54.249] See 'kubectl cordon -h' for help and examples
I1206 17:17:54.249] has:error\: USAGE\: cordon NODE
I1206 17:17:54.319] node/127.0.0.1 already uncordoned
I1206 17:17:54.385] Successful
I1206 17:17:54.385] message:error: You must provide one or more resources by argument or filename.
I1206 17:17:54.385] Example resource specifications include:
I1206 17:17:54.385]    '-f rsrc.yaml'
I1206 17:17:54.385]    '--filename=rsrc.json'
I1206 17:17:54.386]    '<resource> <name>'
I1206 17:17:54.386]    '<resource>'
I1206 17:17:54.386] has:must provide one or more resources
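The cordon/drain errors above reflect the commands' argument rules: a node is named either directly or via --selector, never both, and each invocation needs at least one node argument or filename. A minimal sketch, assuming the single test node 127.0.0.1:

  kubectl cordon 127.0.0.1
  kubectl drain 127.0.0.1 --ignore-daemonsets
  kubectl uncordon -l test=label   # selector form, mutually exclusive with a name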
... skipping 15 lines ...
I1206 17:17:54.776] Successful
I1206 17:17:54.776] message:The following kubectl-compatible plugins are available:
I1206 17:17:54.776] 
I1206 17:17:54.776] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I1206 17:17:54.776]   - warning: kubectl-version overwrites existing command: "kubectl version"
I1206 17:17:54.776] 
I1206 17:17:54.777] error: one plugin warning was found
I1206 17:17:54.777] has:kubectl-version overwrites existing command: "kubectl version"
I1206 17:17:54.843] Successful
I1206 17:17:54.843] message:The following kubectl-compatible plugins are available:
I1206 17:17:54.843] 
I1206 17:17:54.843] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1206 17:17:54.844] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I1206 17:17:54.844]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1206 17:17:54.844] 
I1206 17:17:54.844] error: one plugin warning was found
I1206 17:17:54.844] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I1206 17:17:54.917] Successful
I1206 17:17:54.917] message:The following kubectl-compatible plugins are available:
I1206 17:17:54.917] 
I1206 17:17:54.917] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1206 17:17:54.917] has:plugins are available
I1206 17:17:54.983] Successful
I1206 17:17:54.984] message:
I1206 17:17:54.984] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I1206 17:17:54.984] error: unable to find any kubectl plugins in your PATH
I1206 17:17:54.984] has:unable to find any kubectl plugins in your PATH
I1206 17:17:55.048] Successful
I1206 17:17:55.049] message:I am plugin foo
I1206 17:17:55.049] has:plugin foo
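The plugin output above follows kubectl's discovery rule: any executable on PATH named kubectl-* is a plugin, and name collisions or shadowed copies produce the warnings shown. An illustrative sketch:

  # kubectl-foo is hypothetical; the name after "kubectl-" becomes the subcommand.
  cat > /usr/local/bin/kubectl-foo <<'EOF'
  #!/bin/sh
  echo "I am plugin foo"
  EOF
  chmod +x /usr/local/bin/kubectl-foo
  kubectl plugin list   # discovery plus warnings, as above
  kubectl foo           # dispatches to kubectl-foo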
I1206 17:17:55.114] Successful
I1206 17:17:55.115] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.885+b5615259e5b1b4", GitCommit:"b5615259e5b1b4548d863f3140aadb58c85c6865", GitTreeState:"clean", BuildDate:"2018-12-06T17:11:36Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I1206 17:17:55.182] 
I1206 17:17:55.184] +++ Running case: test-cmd.run_impersonation_tests 
I1206 17:17:55.186] +++ working dir: /go/src/k8s.io/kubernetes
I1206 17:17:55.189] +++ command: run_impersonation_tests
I1206 17:17:55.197] +++ [1206 17:17:55] Testing impersonation
I1206 17:17:55.260] Successful
I1206 17:17:55.260] message:error: requesting groups or user-extra for  without impersonating a user
I1206 17:17:55.261] has:without impersonating a user
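The impersonation error above encodes the rule that --as-group (and user-extra) are honored only together with --as. A minimal sketch:

  kubectl auth can-i get pods --as=user1 --as-group=system:authenticated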
I1206 17:17:55.407] certificatesigningrequest.certificates.k8s.io/foo created
I1206 17:17:55.492] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I1206 17:17:55.574] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I1206 17:17:55.646] certificatesigningrequest.certificates.k8s.io "foo" deleted
I1206 17:17:55.793] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 135 lines ...
W1206 17:17:56.265] I1206 17:17:56.253345   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.265] I1206 17:17:56.253352   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.265] I1206 17:17:56.253359   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.265] I1206 17:17:56.253371   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.265] I1206 17:17:56.253378   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.266] I1206 17:17:56.253382   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.266] W1206 17:17:56.253395   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.266] I1206 17:17:56.253412   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.266] I1206 17:17:56.253400   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.266] I1206 17:17:56.253420   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.266] I1206 17:17:56.253446   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.266] I1206 17:17:56.253452   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.267] W1206 17:17:56.253451   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.267] W1206 17:17:56.253466   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.267] I1206 17:17:56.253487   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.267] W1206 17:17:56.253488   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.267] I1206 17:17:56.253493   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.267] I1206 17:17:56.253527   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.267] W1206 17:17:56.253532   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.268] I1206 17:17:56.253540   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.268] W1206 17:17:56.253530   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.268] W1206 17:17:56.253563   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.268] W1206 17:17:56.253563   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.268] W1206 17:17:56.253601   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.268] W1206 17:17:56.253620   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.269] W1206 17:17:56.253627   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.269] W1206 17:17:56.253636   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.269] W1206 17:17:56.253656   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.269] W1206 17:17:56.253706   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.269] W1206 17:17:56.253720   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.270] W1206 17:17:56.253744   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.270] W1206 17:17:56.253760   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.270] W1206 17:17:56.253659   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.270] W1206 17:17:56.253798   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.270] W1206 17:17:56.253839   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.271] W1206 17:17:56.253843   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.271] W1206 17:17:56.253879   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.271] W1206 17:17:56.253911   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.271] W1206 17:17:56.253914   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 40 lines ...
W1206 17:17:56.279] I1206 17:17:56.254574   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.279] I1206 17:17:56.254588   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.279] W1206 17:17:56.254612   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.279] W1206 17:17:56.254623   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.280] W1206 17:17:56.254630   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.280] I1206 17:17:56.254661   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.280] I1206 17:17:56.254704   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.280] W1206 17:17:56.254850   52307 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1206 17:17:56.280] E1206 17:17:56.256568   52307 controller.go:172] rpc error: code = Unavailable desc = transport is closing
W1206 17:17:56.280] I1206 17:17:56.256792   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W1206 17:17:56.280] I1206 17:17:56.256820   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1206 17:17:56.280] W1206 17:17:56.256861   52307 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W1206 17:17:56.281] I1206 17:17:56.256904   52307 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
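The wall of clientconn warnings above is the etcd client's grpc layer redialing 127.0.0.1:2379 after the test's in-process etcd endpoint has gone away; each open connection keeps retrying until its context is canceled, which is the "transportMonitor exits due to: context canceled" line. A minimal Go sketch of the same behavior, assuming only the google.golang.org/grpc package as of this era; the options shown are illustrative, not the test harness's own code:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
    )

    func main() {
        // Nothing listens on etcd's default client port (2379) once the
        // test's etcd server is torn down, so every transport dial is
        // refused, exactly as in the warnings above.
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        // WithBlock makes DialContext wait for a live connection; against
        // a dead endpoint it keeps redialing (logging "Reconnecting..."
        // at warning verbosity) until the context expires.
        conn, err := grpc.DialContext(ctx, "127.0.0.1:2379",
            grpc.WithInsecure(), // plaintext localhost, matching the log
            grpc.WithBlock())
        if err != nil {
            log.Printf("dial gave up: %v", err) // context deadline exceeded
            return
        }
        defer conn.Close()
    }

The warnings are therefore teardown noise rather than the test failure itself; the failing package is reported further down.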
W1206 17:17:56.290] + make test-integration
I1206 17:17:56.391] No resources found
... skipping 14 lines ...
I1206 17:21:35.405] ok  	k8s.io/kubernetes/test/integration/apimachinery	168.872s
I1206 17:21:35.405] ok  	k8s.io/kubernetes/test/integration/apiserver	37.411s
I1206 17:21:35.405] [restful] 2018/12/06 17:20:27 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:41759/swaggerapi
I1206 17:21:35.405] [restful] 2018/12/06 17:20:27 log.go:33: [restful/swagger] https://127.0.0.1:41759/swaggerui/ is mapped to folder /swagger-ui/
I1206 17:21:35.406] [restful] 2018/12/06 17:20:30 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:41759/swaggerapi
I1206 17:21:35.406] [restful] 2018/12/06 17:20:30 log.go:33: [restful/swagger] https://127.0.0.1:41759/swaggerui/ is mapped to folder /swagger-ui/
I1206 17:21:35.406] FAIL	k8s.io/kubernetes/test/integration/auth	95.662s
I1206 17:21:35.406] [restful] 2018/12/06 17:19:21 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:38381/swaggerapi
I1206 17:21:35.406] [restful] 2018/12/06 17:19:21 log.go:33: [restful/swagger] https://127.0.0.1:38381/swaggerui/ is mapped to folder /swagger-ui/
I1206 17:21:35.406] [restful] 2018/12/06 17:19:23 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:38381/swaggerapi
I1206 17:21:35.407] [restful] 2018/12/06 17:19:23 log.go:33: [restful/swagger] https://127.0.0.1:38381/swaggerui/ is mapped to folder /swagger-ui/
I1206 17:21:35.407] [restful] 2018/12/06 17:19:31 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:35025/swaggerapi
I1206 17:21:35.407] [restful] 2018/12/06 17:19:31 log.go:33: [restful/swagger] https://127.0.0.1:35025/swaggerui/ is mapped to folder /swagger-ui/
... skipping 224 lines ...
I1206 17:30:40.229] [restful] 2018/12/06 17:23:37 log.go:33: [restful/swagger] https://127.0.0.1:46031/swaggerui/ is mapped to folder /swagger-ui/
I1206 17:30:40.229] ok  	k8s.io/kubernetes/test/integration/tls	13.918s
I1206 17:30:40.229] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.808s
I1206 17:30:40.230] ok  	k8s.io/kubernetes/test/integration/volume	90.961s
I1206 17:30:40.230] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	140.952s
I1206 17:30:41.651] +++ [1206 17:30:41] Saved JUnit XML test report to /workspace/artifacts/junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181206-171804.xml
I1206 17:30:41.654] Makefile:184: recipe for target 'test' failed
I1206 17:30:41.662] +++ [1206 17:30:41] Cleaning up etcd
W1206 17:30:41.763] make[1]: *** [test] Error 1
W1206 17:30:41.763] !!! [1206 17:30:41] Call tree:
W1206 17:30:41.763] !!! [1206 17:30:41]  1: hack/make-rules/test-integration.sh:105 runTests(...)
W1206 17:30:41.845] make: *** [test-integration] Error 1
I1206 17:30:41.946] +++ [1206 17:30:41] Integration test cleanup complete
I1206 17:30:41.946] Makefile:203: recipe for target 'test-integration' failed
W1206 17:30:43.094] Traceback (most recent call last):
W1206 17:30:43.094]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 167, in <module>
W1206 17:30:43.094]     main(ARGS.branch, ARGS.script, ARGS.force, ARGS.prow)
W1206 17:30:43.094]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 136, in main
W1206 17:30:43.094]     check(*cmd)
W1206 17:30:43.095]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W1206 17:30:43.095]     subprocess.check_call(cmd)
W1206 17:30:43.095]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
W1206 17:30:43.109]     raise CalledProcessError(retcode, cmd)
W1206 17:30:43.110] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181105-ceed87206', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
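The traceback shows the standard failure path in scenarios/kubernetes_verify.py: check() wraps subprocess.check_call, which raises CalledProcessError as soon as the dockerized test run exits non-zero (status 2 here, from the make failure above). For illustration only, a hedged Go equivalent of that wrapper pattern; the image tag and script path are taken verbatim from the log, while the volume mounts and env flags are omitted and everything else is a sketch:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Only the shape of the call matters here; the real invocation
        // carries the mounts and env vars shown in the traceback above.
        cmd := exec.Command("docker", "run", "--rm=true", "--privileged=true",
            "gcr.io/k8s-testimages/kubekins-test:1.13-v20181105-ceed87206",
            "bash", "-c", "cd kubernetes && ./hack/jenkins/test-dockerized.sh")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

        if err := cmd.Run(); err != nil {
            // For a non-zero container exit (status 2 in this run) err is
            // an *exec.ExitError, Go's counterpart of CalledProcessError.
            log.Fatalf("dockerized tests failed: %v", err)
        }
    }

Either way, the wrapper's job is just to propagate the container's exit status, which is why the job below is marked FAIL even though most test packages passed.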
E1206 17:30:43.115] Command failed
I1206 17:30:43.116] process 508 exited with code 1 after 24.5m
E1206 17:30:43.116] FAIL: ci-kubernetes-integration-master
I1206 17:30:43.117] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1206 17:30:43.641] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1206 17:30:43.690] process 123862 exited with code 0 after 0.0m
I1206 17:30:43.690] Call:  gcloud config get-value account
I1206 17:30:43.950] process 123875 exited with code 0 after 0.0m
I1206 17:30:43.950] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1206 17:30:43.950] Upload result and artifacts...
I1206 17:30:43.950] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/7160
I1206 17:30:43.951] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7160/artifacts
W1206 17:30:45.567] CommandException: One or more URLs matched no objects.
E1206 17:30:45.733] Command failed
I1206 17:30:45.733] process 123888 exited with code 1 after 0.0m
W1206 17:30:45.734] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7160/artifacts not exist yet
I1206 17:30:45.734] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7160/artifacts
I1206 17:30:49.463] process 124033 exited with code 0 after 0.1m
W1206 17:30:49.463] metadata path /workspace/_artifacts/metadata.json does not exist
W1206 17:30:49.463] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...