PR: SidakM: [Need-Feedback] Support Volume Expansion Through StatefulSets
Result: FAILURE
Tests: 1 failed / 578 succeeded
Started: 2018-12-07 05:24
Elapsed: 28m39s
Version: v1.14.0-alpha.0.904+6cb6e6e5babdc9
Builder: gke-prow-default-pool-3c8994a8-93tb
Refs: master:b7030166, 71384:77ec8cda
pod: 4fb8fa1a-f9e0-11e8-8e0e-0a580a6c001a
infra-commit: d6f7bb8bf
repo: k8s.io/kubernetes
repo-commit: 6cb6e6e5babdc9f4e0f4bcad04896f3d15397e08
repos: {u'k8s.io/kubernetes': u'master:b7030166144bf2f4fd4a71514b4f3e04273270ce,71384:77ec8cda3dd5ec7224759ad74a9b4b2a26eb26bc'}

Test Failures


k8s.io/kubernetes/test/integration/apiserver Test202StatusCode 3.69s

go test -v k8s.io/kubernetes/test/integration/apiserver -run Test202StatusCode$
I1207 05:40:53.556955  114211 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I1207 05:40:53.557018  114211 master.go:272] Node port range unspecified. Defaulting to 30000-32767.
I1207 05:40:53.557044  114211 master.go:228] Using reconciler: 
I1207 05:40:53.559226  114211 clientconn.go:551] parsed scheme: ""
I1207 05:40:53.559271  114211 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 05:40:53.559349  114211 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 05:40:53.559509  114211 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 05:40:53.569059  114211 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 310 lines (the same parsed scheme / balancerWrapper etcd dial cycle, repeated once per storage backend) ...
W1207 05:40:53.805154  114211 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W1207 05:40:53.843167  114211 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1207 05:40:53.843865  114211 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1207 05:40:53.846210  114211 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1207 05:40:53.862736  114211 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I1207 05:40:54.557209  114211 clientconn.go:551] parsed scheme: ""
I1207 05:40:54.557255  114211 clientconn.go:557] scheme "" not registered, fallback to default scheme
I1207 05:40:54.557324  114211 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I1207 05:40:54.557390  114211 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 05:40:54.558036  114211 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1207 05:40:54.870137  114211 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I1207 05:40:54.875859  114211 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I1207 05:40:54.875884  114211 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I1207 05:40:54.885134  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1207 05:40:54.888509  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1207 05:40:54.891221  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1207 05:40:54.893584  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I1207 05:40:54.904369  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I1207 05:40:54.909021  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I1207 05:40:54.911949  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1207 05:40:54.915077  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1207 05:40:54.918014  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1207 05:40:54.920140  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1207 05:40:54.923578  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I1207 05:40:54.926190  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1207 05:40:54.928901  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1207 05:40:54.933674  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1207 05:40:54.936020  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1207 05:40:54.938569  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1207 05:40:54.941091  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1207 05:40:54.943932  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1207 05:40:54.951541  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1207 05:40:54.956104  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1207 05:40:54.958759  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1207 05:40:54.961298  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1207 05:40:54.964005  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I1207 05:40:54.966686  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1207 05:40:54.969136  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1207 05:40:54.971823  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1207 05:40:54.974622  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1207 05:40:54.977243  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1207 05:40:54.979682  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1207 05:40:54.982028  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1207 05:40:54.984536  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1207 05:40:54.987099  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1207 05:40:54.989855  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1207 05:40:54.992565  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1207 05:40:54.995621  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1207 05:40:54.998487  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1207 05:40:55.001711  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1207 05:40:55.004403  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1207 05:40:55.007262  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1207 05:40:55.009987  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1207 05:40:55.012696  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1207 05:40:55.014947  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1207 05:40:55.017313  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1207 05:40:55.019776  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1207 05:40:55.022530  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1207 05:40:55.025166  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1207 05:40:55.027968  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1207 05:40:55.030675  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1207 05:40:55.033277  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1207 05:40:55.035900  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1207 05:40:55.069105  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1207 05:40:55.113646  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1207 05:40:55.149094  114211 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1207 05:40:55.188968  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1207 05:40:55.228004  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1207 05:40:55.268650  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1207 05:40:55.307555  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1207 05:40:55.347982  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1207 05:40:55.388165  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1207 05:40:55.427897  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1207 05:40:55.467478  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I1207 05:40:55.508084  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1207 05:40:55.547948  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1207 05:40:55.588020  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1207 05:40:55.627884  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1207 05:40:55.668406  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1207 05:40:55.711639  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1207 05:40:55.748050  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1207 05:40:55.787883  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1207 05:40:55.828010  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1207 05:40:55.869634  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1207 05:40:55.908007  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1207 05:40:55.948279  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1207 05:40:55.990320  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1207 05:40:56.028030  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1207 05:40:56.068034  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1207 05:40:56.108118  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1207 05:40:56.151181  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1207 05:40:56.187943  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1207 05:40:56.227595  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1207 05:40:56.272099  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1207 05:40:56.309310  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1207 05:40:56.357667  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1207 05:40:56.387926  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1207 05:40:56.430286  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1207 05:40:56.467887  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1207 05:40:56.507608  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1207 05:40:56.552694  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1207 05:40:56.587930  114211 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1207 05:40:56.629128  114211 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1207 05:40:56.667492  114211 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1207 05:40:56.707943  114211 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1207 05:40:56.747602  114211 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1207 05:40:56.788074  114211 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1207 05:40:56.828162  114211 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1207 05:40:56.868350  114211 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1207 05:40:56.914246  114211 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1207 05:40:56.948039  114211 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1207 05:40:56.987593  114211 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1207 05:40:57.027520  114211 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1207 05:40:57.068592  114211 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1207 05:40:57.108041  114211 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1207 05:40:57.212673  114211 apiserver_test.go:72] Sending request: &{DELETE http://127.0.0.1:41071/apis/extensions/v1beta1/namespaces/status-code/replicasets/apiserver-test7pjk7 HTTP/1.1 1 1 map[] {} 0x708020 0 [] false 127.0.0.1:41071 map[] map[] <nil> map[]   <nil> <nil> <nil> <nil>}
I1207 05:40:57.234856  114211 apiserver_test.go:72] Sending request: &{DELETE http://127.0.0.1:41071/apis/extensions/v1beta1/namespaces/status-code/replicasets/apiserver-test9x5p6 HTTP/1.1 1 1 map[] {} 0x708020 0 [] false 127.0.0.1:41071 map[] map[] <nil> map[]   <nil> <nil> <nil> <nil>}
I1207 05:40:57.242162  114211 controller.go:170] Shutting down kubernetes service endpoint reconciler
				from junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-053957.xml
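
The two DELETE requests logged just above are the heart of Test202StatusCode: a delete that the apiserver completes asynchronously should be acknowledged with HTTP 202 Accepted rather than 200 OK. As an illustration only (not the test's actual code), here is a minimal Go sketch of that assertion; the resource name is a placeholder, and the port is whatever the test's throwaway apiserver happened to bind (127.0.0.1:41071 in this run):

	package main

	import (
		"fmt"
		"net/http"
		"os"
	)

	func main() {
		// Hypothetical target: the test apiserver's address plus a placeholder
		// ReplicaSet name; the real test generates random names such as
		// apiserver-test7pjk7 seen in the log above.
		url := "http://127.0.0.1:41071/apis/extensions/v1beta1/namespaces/status-code/replicasets/example"

		req, err := http.NewRequest(http.MethodDelete, url, nil)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer resp.Body.Close()

		// The core assertion: an asynchronous delete must report 202 Accepted.
		if resp.StatusCode != http.StatusAccepted {
			fmt.Printf("expected 202 Accepted, got %d\n", resp.StatusCode)
			os.Exit(1)
		}
		fmt.Println("got 202 Accepted")
	}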

578 passed tests and 4 skipped tests (not shown)

Error lines from build-log.txt

... skipping 10 lines ...
I1207 05:24:41.127] process 210 exited with code 0 after 0.0m
I1207 05:24:41.127] Call:  gcloud config get-value account
I1207 05:24:41.421] process 223 exited with code 0 after 0.0m
I1207 05:24:41.421] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 05:24:41.421] Call:  kubectl get -oyaml pods/4fb8fa1a-f9e0-11e8-8e0e-0a580a6c001a
W1207 05:24:43.106] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E1207 05:24:43.109] Command failed
I1207 05:24:43.109] process 236 exited with code 1 after 0.0m
E1207 05:24:43.109] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/4fb8fa1a-f9e0-11e8-8e0e-0a580a6c001a']' returned non-zero exit status 1
I1207 05:24:43.110] Root: /workspace
I1207 05:24:43.110] cd to /workspace
I1207 05:24:43.110] Checkout: /workspace/k8s.io/kubernetes master:b7030166144bf2f4fd4a71514b4f3e04273270ce,71384:77ec8cda3dd5ec7224759ad74a9b4b2a26eb26bc to /workspace/k8s.io/kubernetes
I1207 05:24:43.110] Call:  git init k8s.io/kubernetes
... skipping 824 lines ...
W1207 05:34:29.464] I1207 05:34:29.459568   55520 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W1207 05:34:29.464] I1207 05:34:29.459613   55520 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
W1207 05:34:29.465] I1207 05:34:29.459645   55520 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W1207 05:34:29.465] I1207 05:34:29.459745   55520 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
W1207 05:34:29.465] I1207 05:34:29.459804   55520 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
W1207 05:34:29.466] I1207 05:34:29.459870   55520 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W1207 05:34:29.466] E1207 05:34:29.459919   55520 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 05:34:29.467] I1207 05:34:29.459936   55520 controllermanager.go:516] Started "resourcequota"
W1207 05:34:29.467] I1207 05:34:29.460986   55520 resource_quota_controller.go:276] Starting resource quota controller
W1207 05:34:29.467] I1207 05:34:29.461018   55520 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W1207 05:34:29.467] I1207 05:34:29.461057   55520 resource_quota_monitor.go:301] QuotaMonitor running
W1207 05:34:29.468] I1207 05:34:29.461953   55520 controllermanager.go:516] Started "serviceaccount"
W1207 05:34:29.468] W1207 05:34:29.461996   55520 controllermanager.go:508] Skipping "csrsigning"
... skipping 14 lines ...
W1207 05:34:29.478] I1207 05:34:29.477115   55520 node_lifecycle_controller.go:378] Controller will taint node by condition.
W1207 05:34:29.478] I1207 05:34:29.477149   55520 controllermanager.go:516] Started "nodelifecycle"
W1207 05:34:29.478] I1207 05:34:29.477781   55520 controllermanager.go:516] Started "persistentvolume-binder"
W1207 05:34:29.478] I1207 05:34:29.478229   55520 controllermanager.go:516] Started "persistentvolume-expander"
W1207 05:34:29.479] I1207 05:34:29.478700   55520 controllermanager.go:516] Started "replicationcontroller"
W1207 05:34:29.479] I1207 05:34:29.479007   55520 controllermanager.go:516] Started "podgc"
W1207 05:34:29.479] W1207 05:34:29.479316   55520 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
W1207 05:34:29.479] I1207 05:34:29.479741   55520 controllermanager.go:516] Started "garbagecollector"
W1207 05:34:29.480] I1207 05:34:29.480262   55520 controllermanager.go:516] Started "statefulset"
W1207 05:34:29.480] I1207 05:34:29.480645   55520 controllermanager.go:516] Started "cronjob"
W1207 05:34:29.481] I1207 05:34:29.481054   55520 controllermanager.go:516] Started "endpoint"
W1207 05:34:29.481] W1207 05:34:29.481075   55520 controllermanager.go:508] Skipping "ttl-after-finished"
W1207 05:34:29.485] I1207 05:34:29.485323   55520 controllermanager.go:516] Started "daemonset"
... skipping 39 lines ...
W1207 05:34:29.494] I1207 05:34:29.489191   55520 controllermanager.go:516] Started "pv-protection"
W1207 05:34:29.494] I1207 05:34:29.488677   55520 controller_utils.go:1027] Waiting for caches to sync for deployment controller
W1207 05:34:29.495] W1207 05:34:29.489272   55520 controllermanager.go:508] Skipping "nodeipam"
W1207 05:34:29.495] I1207 05:34:29.489292   55520 pv_protection_controller.go:81] Starting PV protection controller
W1207 05:34:29.495] I1207 05:34:29.488726   55520 controller_utils.go:1027] Waiting for caches to sync for persistent volume controller
W1207 05:34:29.500] I1207 05:34:29.499929   55520 controller_utils.go:1027] Waiting for caches to sync for PV protection controller
W1207 05:34:29.501] E1207 05:34:29.500753   55520 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1207 05:34:29.501] W1207 05:34:29.500775   55520 controllermanager.go:508] Skipping "service"
W1207 05:34:29.501] W1207 05:34:29.501215   55520 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W1207 05:34:29.502] I1207 05:34:29.501755   55520 controllermanager.go:516] Started "attachdetach"
W1207 05:34:29.502] I1207 05:34:29.502468   55520 controllermanager.go:516] Started "disruption"
W1207 05:34:29.502] I1207 05:34:29.502489   55520 attach_detach_controller.go:315] Starting attach detach controller
W1207 05:34:29.503] I1207 05:34:29.502500   55520 controller_utils.go:1027] Waiting for caches to sync for attach detach controller
... skipping 2 lines ...
W1207 05:34:29.503] I1207 05:34:29.503282   55520 disruption.go:288] Starting disruption controller
W1207 05:34:29.503] I1207 05:34:29.503299   55520 controller_utils.go:1027] Waiting for caches to sync for disruption controller
W1207 05:34:29.504] I1207 05:34:29.503332   55520 ttl_controller.go:116] Starting TTL controller
W1207 05:34:29.504] I1207 05:34:29.503342   55520 controller_utils.go:1027] Waiting for caches to sync for TTL controller
W1207 05:34:29.587] I1207 05:34:29.587133   55520 controller_utils.go:1034] Caches are synced for namespace controller
W1207 05:34:29.588] I1207 05:34:29.587501   55520 controller_utils.go:1034] Caches are synced for certificate controller
W1207 05:34:29.589] W1207 05:34:29.588819   55520 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1207 05:34:29.589] I1207 05:34:29.588895   55520 controller_utils.go:1034] Caches are synced for expand controller
W1207 05:34:29.590] I1207 05:34:29.588937   55520 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
W1207 05:34:29.600] I1207 05:34:29.600118   55520 controller_utils.go:1034] Caches are synced for PV protection controller
W1207 05:34:29.605] I1207 05:34:29.604708   55520 controller_utils.go:1034] Caches are synced for TTL controller
W1207 05:34:29.618] E1207 05:34:29.617442   55520 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W1207 05:34:29.624] E1207 05:34:29.624073   55520 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1207 05:34:29.627] E1207 05:34:29.626659   55520 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
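The three Conflict errors above are ordinary optimistic-concurrency collisions: several workers race to update the same aggregated ClusterRole, and every writer after the first must re-read and retry. A minimal retry loop in the same shell style as the harness (the clusterrole name, annotation key, and retry bound are illustrative):

    tries=0
    until kubectl patch clusterrole admin --type=merge -p '{"metadata":{"annotations":{"example":"retry"}}}'; do
      tries=$((tries+1)); [ "${tries}" -ge 5 ] && break   # bound the retries
      sleep 1
    done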
W1207 05:34:29.663] I1207 05:34:29.662518   55520 controller_utils.go:1034] Caches are synced for service account controller
W1207 05:34:29.666] I1207 05:34:29.666403   52167 controller.go:608] quota admission added evaluator for: serviceaccounts
I1207 05:34:29.768] +++ [1207 05:34:29] On try 3, controller-manager: ok
I1207 05:34:29.768] node/127.0.0.1 created
I1207 05:34:29.769] +++ [1207 05:34:29] Checking kubectl version
I1207 05:34:29.769] Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.904+6cb6e6e5babdc9", GitCommit:"6cb6e6e5babdc9f4e0f4bcad04896f3d15397e08", GitTreeState:"clean", BuildDate:"2018-12-07T05:32:17Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 46 lines ...
I1207 05:34:30.825] Successful: --output json has correct server info
I1207 05:34:30.826] +++ [1207 05:34:30] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I1207 05:34:30.881] Successful: --client --output json has correct client info
I1207 05:34:30.886] Successful: --client --output json has no server info
I1207 05:34:30.888] +++ [1207 05:34:30] Testing kubectl version: compare json output using additional --short flag
W1207 05:34:30.989] I1207 05:34:30.918984   55520 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 05:34:30.990] E1207 05:34:30.936120   55520 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 05:34:30.990] I1207 05:34:30.989033   55520 controller_utils.go:1034] Caches are synced for garbage collector controller
W1207 05:34:30.991] I1207 05:34:30.991210   55520 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1207 05:34:31.019] I1207 05:34:31.019263   55520 controller_utils.go:1034] Caches are synced for garbage collector controller
I1207 05:34:31.120] Successful: --short --output client json info is equal to non short result
I1207 05:34:31.121] Successful: --short --output server json info is equal to non short result
I1207 05:34:31.121] +++ [1207 05:34:31] Testing kubectl version: compare json output with yaml output
... skipping 45 lines ...
I1207 05:34:33.929] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:34:33.931] +++ command: run_RESTMapper_evaluation_tests
I1207 05:34:33.941] +++ [1207 05:34:33] Creating namespace namespace-1544160873-2507
I1207 05:34:34.016] namespace/namespace-1544160873-2507 created
I1207 05:34:34.103] Context "test" modified.
I1207 05:34:34.109] +++ [1207 05:34:34] Testing RESTMapper
I1207 05:34:34.231] +++ [1207 05:34:34] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1207 05:34:34.244] +++ exit code: 0
I1207 05:34:34.358] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1207 05:34:34.379] bindings                                                                      true         Binding
I1207 05:34:34.380] componentstatuses                 cs                                          false        ComponentStatus
I1207 05:34:34.380] configmaps                        cm                                          true         ConfigMap
I1207 05:34:34.380] endpoints                         ep                                          true         Endpoints
... skipping 606 lines ...
I1207 05:34:55.456] poddisruptionbudget.policy/test-pdb-3 created
I1207 05:34:55.547] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1207 05:34:55.606] poddisruptionbudget.policy/test-pdb-4 created
I1207 05:34:55.682] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1207 05:34:55.819] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:34:55.977] pod/env-test-pod created
W1207 05:34:56.078] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1207 05:34:56.078] error: setting 'all' parameter but found a non empty selector. 
W1207 05:34:56.078] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 05:34:56.078] I1207 05:34:55.178421   52167 controller.go:608] quota admission added evaluator for: poddisruptionbudgets.policy
W1207 05:34:56.079] error: min-available and max-unavailable cannot be both specified
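The last error above is kubectl's flag validation for PodDisruptionBudgets: a PDB may set --min-available or --max-unavailable, but never both. A sketch (names, selector, and values illustrative):

    kubectl create pdb pdb-a --selector=app=demo --min-available=2       # accepted
    kubectl create pdb pdb-b --selector=app=demo --max-unavailable=50%   # accepted
    kubectl create pdb pdb-c --selector=app=demo --min-available=2 --max-unavailable=1   # rejected: both set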
I1207 05:34:56.179] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I1207 05:34:56.179] Name:               env-test-pod
I1207 05:34:56.179] Namespace:          test-kubectl-describe-pod
I1207 05:34:56.179] Priority:           0
I1207 05:34:56.180] PriorityClassName:  <none>
I1207 05:34:56.180] Node:               <none>
... skipping 145 lines ...
I1207 05:35:09.206] service "modified" deleted
I1207 05:35:09.309] replicationcontroller "modified" deleted
I1207 05:35:09.645] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:35:09.843] pod/valid-pod created
I1207 05:35:09.960] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 05:35:10.143] Successful
I1207 05:35:10.143] message:Error from server: cannot restore map from string
I1207 05:35:10.143] has:cannot restore map from string
W1207 05:35:10.244] E1207 05:35:10.135906   52167 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I1207 05:35:10.344] Successful
I1207 05:35:10.345] message:pod/valid-pod patched (no change)
I1207 05:35:10.345] has:patched (no change)
I1207 05:35:10.360] pod/valid-pod patched
I1207 05:35:10.472] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 05:35:10.577] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 4 lines ...
I1207 05:35:11.137] pod/valid-pod patched
I1207 05:35:11.259] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1207 05:35:11.348] pod/valid-pod patched
I1207 05:35:11.456] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1207 05:35:11.666] pod/valid-pod patched
I1207 05:35:11.793] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 05:35:12.005] +++ [1207 05:35:12] "kubectl patch with resourceVersion 491" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
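That Conflict is the API server's optimistic-concurrency check: the patch pins metadata.resourceVersion to 491, and because the pod has been written since then, the stale write is refused. Reproducing it by hand might look like this (pod name and version numbers illustrative):

    kubectl get pod valid-pod -o jsonpath='{.metadata.resourceVersion}'   # e.g. prints 500
    kubectl patch pod valid-pod -p '{"metadata":{"resourceVersion":"491"},"spec":{"activeDeadlineSeconds":30}}'
    # => Error from server (Conflict): ... please apply your changes to the latest version and try again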
I1207 05:35:12.286] pod "valid-pod" deleted
I1207 05:35:12.299] pod/valid-pod replaced
I1207 05:35:12.411] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1207 05:35:12.599] Successful
I1207 05:35:12.600] message:error: --grace-period must have --force specified
I1207 05:35:12.600] has:\-\-grace-period must have \-\-force specified
I1207 05:35:12.804] Successful
I1207 05:35:12.804] message:error: --timeout must have --force specified
I1207 05:35:12.804] has:\-\-timeout must have \-\-force specified
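Both failures exercise the same guard: in this build, kubectl delete only honors --grace-period=0 and --timeout together with --force, and rejects them on their own. For example:

    kubectl delete pod valid-pod --grace-period=0           # rejected: needs --force
    kubectl delete pod valid-pod --grace-period=0 --force   # accepted; immediate deletion, no confirmation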
W1207 05:35:12.999] W1207 05:35:12.999293   55520 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1207 05:35:13.100] node/node-v1-test created
I1207 05:35:13.197] node/node-v1-test replaced
I1207 05:35:13.328] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1207 05:35:13.431] node "node-v1-test" deleted
I1207 05:35:13.536] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 05:35:13.833] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 46 lines ...
I1207 05:35:18.128] pod "test-pod" deleted
I1207 05:35:18.143] +++ exit code: 0
W1207 05:35:18.244] Edit cancelled, no changes made.
W1207 05:35:18.565] Edit cancelled, no changes made.
W1207 05:35:18.565] Edit cancelled, no changes made.
W1207 05:35:18.565] Edit cancelled, no changes made.
W1207 05:35:18.566] error: 'name' already has a value (valid-pod), and --overwrite is false
W1207 05:35:18.566] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 05:35:18.566] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 05:35:18.791] Recording: run_save_config_tests
I1207 05:35:18.791] Running command: run_save_config_tests
I1207 05:35:18.807] 
I1207 05:35:18.808] +++ Running case: test-cmd.run_save_config_tests 
... skipping 53 lines ...
I1207 05:35:23.129] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1207 05:35:23.132] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:35:23.134] +++ command: run_kubectl_create_error_tests
I1207 05:35:23.144] +++ [1207 05:35:23] Creating namespace namespace-1544160923-14284
I1207 05:35:23.253] namespace/namespace-1544160923-14284 created
I1207 05:35:23.345] Context "test" modified.
I1207 05:35:23.352] +++ [1207 05:35:23] Testing kubectl create with error
W1207 05:35:23.453] Error: required flag(s) "filename" not set
W1207 05:35:23.453] 
W1207 05:35:23.453] 
W1207 05:35:23.454] Examples:
W1207 05:35:23.454]   # Create a pod using the data in pod.json.
W1207 05:35:23.454]   kubectl create -f ./pod.json
W1207 05:35:23.454]   
... skipping 38 lines ...
W1207 05:35:23.461]   kubectl create -f FILENAME [options]
W1207 05:35:23.461] 
W1207 05:35:23.461] Use "kubectl <command> --help" for more information about a given command.
W1207 05:35:23.462] Use "kubectl options" for a list of global command-line options (applies to all commands).
W1207 05:35:23.462] 
W1207 05:35:23.462] required flag(s) "filename" not set
I1207 05:35:23.617] +++ [1207 05:35:23] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
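The ValidationError comes from kubectl's client-side schema validation; --validate=false skips that check and lets the request go to the server, which is what the hint in the message suggests:

    kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml                    # rejected by client-side validation
    kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml --validate=false   # validation skipped (the server may still object)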
W1207 05:35:23.718] kubectl convert is DEPRECATED and will be removed in a future version.
W1207 05:35:23.719] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 05:35:23.819] +++ exit code: 0
I1207 05:35:23.838] Recording: run_kubectl_apply_tests
I1207 05:35:23.838] Running command: run_kubectl_apply_tests
I1207 05:35:23.853] 
... skipping 21 lines ...
W1207 05:35:26.251] I1207 05:35:25.675729   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544160923-7672", Name:"test-deployment-retainkeys", UID:"e5914310-f9e1-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-deployment-retainkeys-7495cff5f to 1
W1207 05:35:26.252] I1207 05:35:25.684147   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544160923-7672", Name:"test-deployment-retainkeys-7495cff5f", UID:"e6032935-f9e1-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"504", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-7495cff5f-9jp9f
I1207 05:35:26.352] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:35:26.417] pod/selector-test-pod created
I1207 05:35:26.512] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1207 05:35:26.611] Successful
I1207 05:35:26.612] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1207 05:35:26.612] has:pods "selector-test-pod-dont-apply" not found
I1207 05:35:26.709] pod "selector-test-pod" deleted
I1207 05:35:26.821] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:35:27.041] pod/test-pod created (server dry run)
I1207 05:35:27.131] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
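"created (server dry run)" means the object passed server-side validation and admission but was never persisted, which is why the follow-up get still reports no pods. The flag spelling in this era (later replaced by --dry-run=server; the manifest name is illustrative):

    kubectl apply -f pod.yaml --server-dry-run   # validated and admitted server-side, nothing stored
    kubectl get pods                             # still empty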
I1207 05:35:27.478] pod/test-pod created
... skipping 4 lines ...
W1207 05:35:28.355] I1207 05:35:28.355137   52167 clientconn.go:551] parsed scheme: ""
W1207 05:35:28.355] I1207 05:35:28.355169   52167 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 05:35:28.356] I1207 05:35:28.355237   52167 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 05:35:28.356] I1207 05:35:28.355303   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:35:28.358] I1207 05:35:28.357771   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:35:28.438] I1207 05:35:28.438036   52167 controller.go:608] quota admission added evaluator for: resources.mygroup.example.com
W1207 05:35:28.525] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1207 05:35:28.626] kind.mygroup.example.com/myobj created (server dry run)
I1207 05:35:28.626] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1207 05:35:28.708] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:35:28.858] pod/a created
I1207 05:35:30.359] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I1207 05:35:30.451] Successful
I1207 05:35:30.451] message:Error from server (NotFound): pods "b" not found
I1207 05:35:30.452] has:pods "b" not found
I1207 05:35:30.595] pod/b created
I1207 05:35:30.607] pod/a pruned
I1207 05:35:32.279] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I1207 05:35:32.348] Successful
I1207 05:35:32.348] message:Error from server (NotFound): pods "a" not found
I1207 05:35:32.348] has:pods "a" not found
I1207 05:35:32.413] pod "b" deleted
I1207 05:35:32.487] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:35:32.618] pod/a created
I1207 05:35:32.694] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I1207 05:35:32.765] Successful
I1207 05:35:32.765] message:Error from server (NotFound): pods "b" not found
I1207 05:35:32.765] has:pods "b" not found
I1207 05:35:32.894] pod/b created
I1207 05:35:32.980] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I1207 05:35:33.052] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I1207 05:35:33.118] pod "a" deleted
I1207 05:35:33.122] pod "b" deleted
I1207 05:35:33.253] Successful
I1207 05:35:33.253] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I1207 05:35:33.253] has:all resources selected for prune without explicitly passing --all
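Prune deletes previously-applied objects that are missing from the current manifest set, so kubectl refuses to run it unscoped: either a label selector or an explicit --all is required. For example (directory and label illustrative):

    kubectl apply --prune -f manifests/ -l app=demo   # prune limited to the selector
    kubectl apply --prune -f manifests/ --all         # explicit opt-in to prune everything applied before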
I1207 05:35:33.377] pod/a created
I1207 05:35:33.382] pod/b created
I1207 05:35:33.389] service/prune-svc created
I1207 05:35:34.912] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I1207 05:35:35.025] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 138 lines ...
I1207 05:35:47.983] Context "test" modified.
I1207 05:35:47.988] +++ [1207 05:35:47] Testing kubectl create filter
I1207 05:35:48.064] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:35:48.200] pod/selector-test-pod created
I1207 05:35:48.285] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1207 05:35:48.366] Successful
I1207 05:35:48.366] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1207 05:35:48.366] has:pods "selector-test-pod-dont-apply" not found
I1207 05:35:48.434] pod "selector-test-pod" deleted
I1207 05:35:48.450] +++ exit code: 0
I1207 05:35:49.101] Recording: run_kubectl_apply_deployments_tests
I1207 05:35:49.101] Running command: run_kubectl_apply_deployments_tests
I1207 05:35:49.116] 
... skipping 28 lines ...
I1207 05:35:50.918] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:35:50.998] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:35:51.078] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:35:51.217] deployment.extensions/nginx created
I1207 05:35:51.310] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I1207 05:35:55.485] Successful
I1207 05:35:55.485] message:Error from server (Conflict): error when applying patch:
I1207 05:35:55.486] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544160949-23801\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1207 05:35:55.486] to:
I1207 05:35:55.486] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I1207 05:35:55.486] Name: "nginx", Namespace: "namespace-1544160949-23801"
I1207 05:35:55.487] Object: &{map["spec":map["template":map["spec":map["containers":[map["terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[]]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"] "metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]]] "status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available" "status":"False" "lastUpdateTime":"2018-12-07T05:35:51Z" "lastTransitionTime":"2018-12-07T05:35:51Z"]]] "kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["name":"nginx" "namespace":"namespace-1544160949-23801" "uid":"f53c9048-f9e1-11e8-9dca-0242ac110002" "resourceVersion":"711" "generation":'\x01' "creationTimestamp":"2018-12-07T05:35:51Z" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1544160949-23801/deployments/nginx" "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544160949-23801\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"]]]}
I1207 05:35:55.488] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I1207 05:35:55.488] has:Error from server (Conflict)
W1207 05:35:55.588] I1207 05:35:49.627799   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544160949-23801", Name:"my-depl", UID:"f449baad-f9e1-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-559b7bc95d to 1
W1207 05:35:55.589] I1207 05:35:49.631542   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544160949-23801", Name:"my-depl-559b7bc95d", UID:"f44a2a01-f9e1-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-559b7bc95d-hwlp8
W1207 05:35:55.589] I1207 05:35:50.100667   52167 controller.go:608] quota admission added evaluator for: replicasets.extensions
W1207 05:35:55.589] I1207 05:35:50.105393   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544160949-23801", Name:"my-depl", UID:"f449baad-f9e1-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-6676598dcb to 1
W1207 05:35:55.589] I1207 05:35:50.107877   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544160949-23801", Name:"my-depl-6676598dcb", UID:"f4930b32-f9e1-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-6676598dcb-kp7nq
W1207 05:35:55.590] I1207 05:35:51.222210   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544160949-23801", Name:"nginx", UID:"f53c9048-f9e1-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
... skipping 91 lines ...
I1207 05:36:08.166] +++ [1207 05:36:08] Creating namespace namespace-1544160968-30919
I1207 05:36:08.272] namespace/namespace-1544160968-30919 created
I1207 05:36:08.349] Context "test" modified.
I1207 05:36:08.354] +++ [1207 05:36:08] Testing kubectl get
I1207 05:36:08.455] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:08.572] Successful
I1207 05:36:08.572] message:Error from server (NotFound): pods "abc" not found
I1207 05:36:08.573] has:pods "abc" not found
I1207 05:36:08.677] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:08.783] Successful
I1207 05:36:08.783] message:Error from server (NotFound): pods "abc" not found
I1207 05:36:08.783] has:pods "abc" not found
I1207 05:36:08.896] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:08.966] Successful
I1207 05:36:08.967] message:{
I1207 05:36:08.967]     "apiVersion": "v1",
I1207 05:36:08.967]     "items": [],
... skipping 23 lines ...
I1207 05:36:09.253] has not:No resources found
I1207 05:36:09.324] Successful
I1207 05:36:09.324] message:NAME
I1207 05:36:09.324] has not:No resources found
I1207 05:36:09.398] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:09.493] Successful
I1207 05:36:09.493] message:error: the server doesn't have a resource type "foobar"
I1207 05:36:09.494] has not:No resources found
I1207 05:36:09.572] Successful
I1207 05:36:09.572] message:No resources found.
I1207 05:36:09.572] has:No resources found
I1207 05:36:09.650] Successful
I1207 05:36:09.650] message:
I1207 05:36:09.650] has not:No resources found
I1207 05:36:09.722] Successful
I1207 05:36:09.722] message:No resources found.
I1207 05:36:09.723] has:No resources found
I1207 05:36:09.804] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:09.883] Successful
I1207 05:36:09.883] message:Error from server (NotFound): pods "abc" not found
I1207 05:36:09.883] has:pods "abc" not found
I1207 05:36:09.884] FAIL!
I1207 05:36:09.885] message:Error from server (NotFound): pods "abc" not found
I1207 05:36:09.885] has not:List
I1207 05:36:09.885] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
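This is the section's one assertion failure: the harness greps the captured message for the expected needle and, when it is absent, prints FAIL!, the message, the needle after "has not:" ("List" here), and the failing line and script (get.sh line 99). A simplified sketch of that idiom, assuming the kube::test::if_has_string helper in hack/lib/test.sh (variable name illustrative):

    if echo "${message}" | grep -q "List"; then
      echo "Successful"
    else
      echo "FAIL!"; echo "message:${message}"; echo "has not:List"
    fi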
I1207 05:36:09.981] Successful
I1207 05:36:09.982] message:I1207 05:36:09.939089   67420 loader.go:359] Config loaded from file /tmp/tmp.qDStPJZmtr/.kube/config
I1207 05:36:09.982] I1207 05:36:09.939572   67420 loader.go:359] Config loaded from file /tmp/tmp.qDStPJZmtr/.kube/config
I1207 05:36:09.982] I1207 05:36:09.940676   67420 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
... skipping 995 lines ...
I1207 05:36:13.305] }
I1207 05:36:13.382] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 05:36:13.595] <no value>
I1207 05:36:13.595] Successful
I1207 05:36:13.595] message:valid-pod:
I1207 05:36:13.595] has:valid-pod:
I1207 05:36:13.667] Successful
I1207 05:36:13.668] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1207 05:36:13.668] 	template was:
I1207 05:36:13.668] 		{.missing}
I1207 05:36:13.668] 	object given to jsonpath engine was:
I1207 05:36:13.669] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"namespace-1544160972-3052", "selfLink":"/api/v1/namespaces/namespace-1544160972-3052/pods/valid-pod", "uid":"025b0f3c-f9e2-11e8-9dca-0242ac110002", "resourceVersion":"808", "creationTimestamp":"2018-12-07T05:36:13Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod"}, "spec":map[string]interface {}{"dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I1207 05:36:13.669] has:missing is not found
I1207 05:36:13.739] Successful
I1207 05:36:13.739] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1207 05:36:13.740] 	template was:
I1207 05:36:13.740] 		{{.missing}}
I1207 05:36:13.740] 	raw data was:
I1207 05:36:13.740] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2018-12-07T05:36:13Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1544160972-3052","resourceVersion":"808","selfLink":"/api/v1/namespaces/namespace-1544160972-3052/pods/valid-pod","uid":"025b0f3c-f9e2-11e8-9dca-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1207 05:36:13.740] 	object given to template engine was:
I1207 05:36:13.741] 		map[apiVersion:v1 kind:Pod metadata:map[name:valid-pod namespace:namespace-1544160972-3052 resourceVersion:808 selfLink:/api/v1/namespaces/namespace-1544160972-3052/pods/valid-pod uid:025b0f3c-f9e2-11e8-9dca-0242ac110002 creationTimestamp:2018-12-07T05:36:13Z labels:map[name:valid-pod]] spec:map[priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[requests:map[cpu:1 memory:512Mi] limits:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log]] dnsPolicy:ClusterFirst enableServiceLinks:true] status:map[phase:Pending qosClass:Guaranteed]]
I1207 05:36:13.741] has:map has no entry for key "missing"
W1207 05:36:13.841] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
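The errors above contrast the output printers: both jsonpath ({.missing}) and go-template ({{.missing}}) treat an absent key as a hard error rather than printing an empty string. For example (pod name illustrative):

    kubectl get pod valid-pod -o jsonpath='{.metadata.name}'   # prints the value
    kubectl get pod valid-pod -o jsonpath='{.missing}'         # error: missing is not found
    kubectl get pod valid-pod -o go-template='{{.missing}}'    # error: map has no entry for key "missing"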
W1207 05:36:14.809] E1207 05:36:14.808947   67801 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I1207 05:36:14.910] Successful
I1207 05:36:14.910] message:NAME        READY   STATUS    RESTARTS   AGE
I1207 05:36:14.910] valid-pod   0/1     Pending   0          0s
I1207 05:36:14.910] has:STATUS
I1207 05:36:14.911] Successful
... skipping 80 lines ...
I1207 05:36:17.067]   terminationGracePeriodSeconds: 30
I1207 05:36:17.067] status:
I1207 05:36:17.067]   phase: Pending
I1207 05:36:17.067]   qosClass: Guaranteed
I1207 05:36:17.067] has:name: valid-pod
I1207 05:36:17.067] Successful
I1207 05:36:17.067] message:Error from server (NotFound): pods "invalid-pod" not found
I1207 05:36:17.067] has:"invalid-pod" not found
I1207 05:36:17.117] pod "valid-pod" deleted
I1207 05:36:17.202] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:17.337] pod/redis-master created
I1207 05:36:17.342] pod/valid-pod created
I1207 05:36:17.422] Successful
... skipping 284 lines ...
I1207 05:36:20.212] message:NAME
I1207 05:36:20.212] sample-role
I1207 05:36:20.213] has:NAME
I1207 05:36:20.213] sample-role
W1207 05:36:20.313] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1207 05:36:20.313] I1207 05:36:19.692786   55520 event.go:221] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1544160977-8693", Name:"pi", UID:"06353d3b-f9e2-11e8-9dca-0242ac110002", APIVersion:"batch/v1", ResourceVersion:"845", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-5lqfx
W1207 05:36:20.358] E1207 05:36:20.357764   52167 autoregister_controller.go:190] v1.company.com failed with : apiservices.apiregistration.k8s.io "v1.company.com" already exists
I1207 05:36:20.459] customresourcedefinition.apiextensions.k8s.io/foos.company.com created
I1207 05:36:20.459] old-print.sh:120: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: foos.company.com:
I1207 05:36:20.561] old-print.sh:123: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:20.714] Successful
I1207 05:36:20.714] message:
I1207 05:36:20.714] has:
... skipping 16 lines ...
I1207 05:36:21.510] Running command: run_create_secret_tests
I1207 05:36:21.525] 
I1207 05:36:21.526] +++ Running case: test-cmd.run_create_secret_tests 
I1207 05:36:21.528] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:36:21.531] +++ command: run_create_secret_tests
I1207 05:36:21.621] Successful
I1207 05:36:21.621] message:Error from server (NotFound): secrets "mysecret" not found
I1207 05:36:21.621] has:secrets "mysecret" not found
I1207 05:36:21.765] Successful
I1207 05:36:21.765] message:Error from server (NotFound): secrets "mysecret" not found
I1207 05:36:21.765] has:secrets "mysecret" not found
I1207 05:36:21.767] Successful
I1207 05:36:21.767] message:user-specified
I1207 05:36:21.767] has:user-specified
I1207 05:36:21.832] Successful
I1207 05:36:21.905] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"0786e1b3-f9e2-11e8-9dca-0242ac110002","resourceVersion":"882","creationTimestamp":"2018-12-07T05:36:21Z"}}
... skipping 80 lines ...
I1207 05:36:24.376] has:Timeout exceeded while reading body
I1207 05:36:24.445] Successful
I1207 05:36:24.445] message:NAME        READY   STATUS    RESTARTS   AGE
I1207 05:36:24.445] valid-pod   0/1     Pending   0          1s
I1207 05:36:24.445] has:valid-pod
I1207 05:36:24.503] Successful
I1207 05:36:24.503] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1207 05:36:24.504] has:Invalid timeout value
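--request-timeout accepts a bare integer (seconds) or an integer with a time unit; anything else triggers the error above. For example:

    kubectl get pods --request-timeout=10    # ten seconds
    kubectl get pods --request-timeout=2m    # two minutes
    kubectl get pods --request-timeout=10x   # rejected: invalid unit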
I1207 05:36:24.572] pod "valid-pod" deleted
I1207 05:36:24.587] +++ exit code: 0
I1207 05:36:24.708] Recording: run_crd_tests
I1207 05:36:24.708] Running command: run_crd_tests
I1207 05:36:24.724] 
... skipping 47 lines ...
I1207 05:36:27.445] foo.company.com/test
I1207 05:36:27.510] foo.company.com/test
I1207 05:36:27.587] NAME   AGE
I1207 05:36:27.588] test   1s
I1207 05:36:27.665] NAME   AGE
I1207 05:36:27.666] test   1s
W1207 05:36:27.766] E1207 05:36:25.433955   52167 autoregister_controller.go:190] v1alpha1.mygroup.example.com failed with : apiservices.apiregistration.k8s.io "v1alpha1.mygroup.example.com" already exists
W1207 05:36:27.863] I1207 05:36:25.964784   52167 clientconn.go:551] parsed scheme: ""
W1207 05:36:27.863] I1207 05:36:25.964811   52167 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 05:36:27.863] I1207 05:36:25.964863   52167 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 05:36:27.863] I1207 05:36:25.964904   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:36:27.863] I1207 05:36:25.965195   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:36:27.864] I1207 05:36:26.023504   52167 clientconn.go:551] parsed scheme: ""
... skipping 108 lines ...
I1207 05:36:28.501] foo.company.com/test patched
I1207 05:36:28.583] crd.sh:237: Successful get foos/test {{.patched}}: value1
I1207 05:36:28.654] foo.company.com/test patched
I1207 05:36:28.734] crd.sh:239: Successful get foos/test {{.patched}}: value2
I1207 05:36:28.807] foo.company.com/test patched
I1207 05:36:28.880] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I1207 05:36:29.008] +++ [1207 05:36:29] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
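Strategic merge patch depends on schema metadata that custom resources do not carry, so kubectl rejects it for a CR and the tests fall back to a JSON merge patch, as the hint in the message suggests:

    kubectl patch foos/test -p '{"patched":"value1"}'                # default strategic merge: not supported for CRs
    kubectl patch foos/test --type=merge -p '{"patched":"value1"}'   # JSON merge patch: accepted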
I1207 05:36:29.059] {
I1207 05:36:29.059]     "apiVersion": "company.com/v1",
I1207 05:36:29.060]     "kind": "Foo",
I1207 05:36:29.060]     "metadata": {
I1207 05:36:29.060]         "annotations": {
I1207 05:36:29.060]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 112 lines ...
I1207 05:36:30.250] bar.company.com "test" deleted
W1207 05:36:30.351] I1207 05:36:30.033356   52167 controller.go:608] quota admission added evaluator for: bars.company.com
W1207 05:36:30.351] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 70259 Killed                  while [ ${tries} -lt 10 ]; do
W1207 05:36:30.351]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W1207 05:36:30.351] done
W1207 05:36:30.351] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 70258 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W1207 05:36:31.061] E1207 05:36:31.060415   55520 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources"]
W1207 05:36:31.248] I1207 05:36:31.248283   55520 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 05:36:31.249] I1207 05:36:31.249350   52167 clientconn.go:551] parsed scheme: ""
W1207 05:36:31.249] I1207 05:36:31.249440   52167 clientconn.go:557] scheme "" not registered, fallback to default scheme
W1207 05:36:31.250] I1207 05:36:31.249518   52167 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W1207 05:36:31.250] I1207 05:36:31.249646   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:36:31.250] I1207 05:36:31.249996   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 66 lines ...
I1207 05:36:41.955] crd.sh:459: Successful get bars {{len .items}}: 0
I1207 05:36:42.106] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I1207 05:36:42.194] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I1207 05:36:42.281] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1207 05:36:42.368] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I1207 05:36:42.394] +++ exit code: 0
W1207 05:36:42.495] Error from server (NotFound): namespaces "non-native-resources" not found
I1207 05:36:43.037] Recording: run_cmd_with_img_tests
I1207 05:36:43.037] Running command: run_cmd_with_img_tests
I1207 05:36:43.055] 
I1207 05:36:43.056] +++ Running case: test-cmd.run_cmd_with_img_tests 
I1207 05:36:43.058] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:36:43.060] +++ command: run_cmd_with_img_tests
... skipping 3 lines ...
I1207 05:36:43.203] +++ [1207 05:36:43] Testing cmd with image
I1207 05:36:43.284] Successful
I1207 05:36:43.284] message:deployment.apps/test1 created
I1207 05:36:43.284] has:deployment.apps/test1 created
I1207 05:36:43.355] deployment.extensions "test1" deleted
I1207 05:36:43.426] Successful
I1207 05:36:43.426] message:error: Invalid image name "InvalidImageName": invalid reference format
I1207 05:36:43.426] has:error: Invalid image name "InvalidImageName": invalid reference format
I1207 05:36:43.439] +++ exit code: 0
I1207 05:36:43.515] Recording: run_recursive_resources_tests
I1207 05:36:43.516] Running command: run_recursive_resources_tests
I1207 05:36:43.534] 
I1207 05:36:43.535] +++ Running case: test-cmd.run_recursive_resources_tests 
I1207 05:36:43.537] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I1207 05:36:43.681] Context "test" modified.
I1207 05:36:43.762] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:44.000] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:44.002] Successful
I1207 05:36:44.003] message:pod/busybox0 created
I1207 05:36:44.003] pod/busybox1 created
I1207 05:36:44.003] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 05:36:44.003] has:error validating data: kind not set
I1207 05:36:44.084] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:44.242] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1207 05:36:44.244] Successful
I1207 05:36:44.244] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:36:44.244] has:Object 'Kind' is missing
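Every "Object 'Kind' is missing" error in this block comes from the deliberately broken fixture, whose manifest misspells the type key ("ind" instead of "kind"); --recursive keeps processing the sibling manifests and surfaces the decode error at the end. Restoring the key would make the whole directory apply cleanly:

    kubectl create -f hack/testdata/recursive/pod --recursive   # busybox0 and busybox1 created; busybox2 fails to decode
    # fixing "ind": "Pod"  ->  "kind": "Pod" in busybox-broken.yaml removes the error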
I1207 05:36:44.329] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:44.573] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1207 05:36:44.575] Successful
I1207 05:36:44.575] message:pod/busybox0 replaced
I1207 05:36:44.575] pod/busybox1 replaced
I1207 05:36:44.575] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 05:36:44.575] has:error validating data: kind not set
I1207 05:36:44.659] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:44.750] Successful
I1207 05:36:44.750] message:Name:               busybox0
I1207 05:36:44.750] Namespace:          namespace-1544161003-8804
I1207 05:36:44.750] Priority:           0
I1207 05:36:44.750] PriorityClassName:  <none>
... skipping 159 lines ...
I1207 05:36:44.761] has:Object 'Kind' is missing
I1207 05:36:44.839] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:45.007] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1207 05:36:45.009] Successful
I1207 05:36:45.010] message:pod/busybox0 annotated
I1207 05:36:45.010] pod/busybox1 annotated
I1207 05:36:45.010] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:36:45.010] has:Object 'Kind' is missing
I1207 05:36:45.091] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:45.342] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1207 05:36:45.344] Successful
I1207 05:36:45.344] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 05:36:45.344] pod/busybox0 configured
I1207 05:36:45.344] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 05:36:45.344] pod/busybox1 configured
I1207 05:36:45.344] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 05:36:45.344] has:error validating data: kind not set
I1207 05:36:45.423] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:45.583] deployment.extensions/nginx created
I1207 05:36:45.673] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1207 05:36:45.762] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:36:45.907] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I1207 05:36:45.909] Successful
... skipping 42 lines ...
I1207 05:36:45.981] deployment.extensions "nginx" deleted
I1207 05:36:46.070] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:46.227] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:46.229] Successful
I1207 05:36:46.230] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1207 05:36:46.230] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 05:36:46.230] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:36:46.230] has:Object 'Kind' is missing
I1207 05:36:46.310] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:47.406] Successful
I1207 05:36:47.406] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:36:47.407] has:busybox0:busybox1:
I1207 05:36:47.408] Successful
I1207 05:36:47.409] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:36:47.409] has:Object 'Kind' is missing
W1207 05:36:47.509] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1207 05:36:47.510] I1207 05:36:43.275194   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161003-31243", Name:"test1", UID:"14439f1a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"987", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W1207 05:36:47.510] I1207 05:36:43.279711   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161003-31243", Name:"test1-fb488bd5d", UID:"14441617-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-fp9bw
W1207 05:36:47.511] I1207 05:36:45.585891   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161003-8804", Name:"nginx", UID:"15a434b6-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W1207 05:36:47.511] I1207 05:36:45.588219   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161003-8804", Name:"nginx-6f6bb85d9c", UID:"15a4b02a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-xjpct
W1207 05:36:47.511] I1207 05:36:45.590982   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161003-8804", Name:"nginx-6f6bb85d9c", UID:"15a4b02a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-pwxbm
W1207 05:36:47.512] I1207 05:36:45.591022   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161003-8804", Name:"nginx-6f6bb85d9c", UID:"15a4b02a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-gg4ps
W1207 05:36:47.512] kubectl convert is DEPRECATED and will be removed in a future version.
W1207 05:36:47.512] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1207 05:36:47.512] I1207 05:36:46.864376   55520 namespace_controller.go:171] Namespace has been deleted non-native-resources
I1207 05:36:47.712] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:47.874] pod/busybox0 labeled
I1207 05:36:47.874] pod/busybox1 labeled
I1207 05:36:47.874] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:36:47.998] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1207 05:36:48.247] Successful
I1207 05:36:48.247] message:pod/busybox0 labeled
I1207 05:36:48.247] pod/busybox1 labeled
I1207 05:36:48.248] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:36:48.248] has:Object 'Kind' is missing
I1207 05:36:48.337] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:48.455] pod/busybox0 patched
I1207 05:36:48.455] pod/busybox1 patched
I1207 05:36:48.455] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:36:48.555] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1207 05:36:48.557] Successful
I1207 05:36:48.557] message:pod/busybox0 patched
I1207 05:36:48.557] pod/busybox1 patched
I1207 05:36:48.557] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:36:48.557] has:Object 'Kind' is missing
I1207 05:36:48.659] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:48.841] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:48.843] Successful
I1207 05:36:48.843] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 05:36:48.844] pod "busybox0" force deleted
I1207 05:36:48.844] pod "busybox1" force deleted
I1207 05:36:48.844] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 05:36:48.844] has:Object 'Kind' is missing
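The immediate-deletion warning appears whenever the grace period is forced to zero; given the recursive fixture directory, the presumed invocation is something like:

    kubectl delete -f hack/testdata/recursive/pod --recursive --force --grace-period=0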
I1207 05:36:48.935] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:49.161] replicationcontroller/busybox0 created
I1207 05:36:49.168] replicationcontroller/busybox1 created
I1207 05:36:49.270] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:49.356] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:49.443] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 05:36:49.526] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 05:36:49.724] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1207 05:36:49.825] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1207 05:36:49.827] Successful
I1207 05:36:49.827] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1207 05:36:49.828] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1207 05:36:49.828] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:49.828] has:Object 'Kind' is missing
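The asserted HPA spec (min 1, max 2, target 80% CPU) is consistent with a recursive autoscale over the rc fixture directory; a sketch with the flags inferred from the assertions:

    kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80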
I1207 05:36:49.906] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1207 05:36:49.984] horizontalpodautoscaler.autoscaling "busybox1" deleted
I1207 05:36:50.077] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:50.161] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 05:36:50.247] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 05:36:50.417] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1207 05:36:50.548] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1207 05:36:50.551] Successful
I1207 05:36:50.551] message:service/busybox0 exposed
I1207 05:36:50.551] service/busybox1 exposed
I1207 05:36:50.552] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:50.552] has:Object 'Kind' is missing
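Both controllers are exposed on port 80 with an unnamed port ("<no value>" above); a plausible form of the command:

    kubectl expose -f hack/testdata/recursive/rc --recursive --port=80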
I1207 05:36:50.638] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:50.720] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 05:36:50.804] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 05:36:50.999] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I1207 05:36:51.081] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I1207 05:36:51.083] Successful
I1207 05:36:51.083] message:replicationcontroller/busybox0 scaled
I1207 05:36:51.083] replicationcontroller/busybox1 scaled
I1207 05:36:51.084] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:51.084] has:Object 'Kind' is missing
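Both controllers go from 1 to 2 replicas while the broken manifest still fails to decode; a sketch of the presumed recursive scale (exact flags assumed):

    kubectl scale -f hack/testdata/recursive/rc --recursive --replicas=2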
I1207 05:36:51.166] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 05:36:51.372] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:51.374] Successful
I1207 05:36:51.374] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 05:36:51.374] replicationcontroller "busybox0" force deleted
I1207 05:36:51.374] replicationcontroller "busybox1" force deleted
I1207 05:36:51.374] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:51.374] has:Object 'Kind' is missing
I1207 05:36:51.456] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:51.656] deployment.extensions/nginx1-deployment created
I1207 05:36:51.678] deployment.extensions/nginx0-deployment created
W1207 05:36:51.779] I1207 05:36:49.165091   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161003-8804", Name:"busybox0", UID:"17c63dd2-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1045", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cfvhq
W1207 05:36:51.779] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 05:36:51.779] I1207 05:36:49.170431   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161003-8804", Name:"busybox1", UID:"17c713f8-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1049", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-rbhh8
W1207 05:36:51.779] I1207 05:36:50.908571   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161003-8804", Name:"busybox0", UID:"17c63dd2-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1066", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-8s248
W1207 05:36:51.780] I1207 05:36:50.915660   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161003-8804", Name:"busybox1", UID:"17c713f8-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-2kspn
W1207 05:36:51.780] I1207 05:36:51.659338   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161003-8804", Name:"nginx1-deployment", UID:"1942d676-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W1207 05:36:51.780] I1207 05:36:51.661531   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161003-8804", Name:"nginx1-deployment-75f6fc6747", UID:"194364f6-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-zzczt
W1207 05:36:51.781] I1207 05:36:51.663442   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161003-8804", Name:"nginx1-deployment-75f6fc6747", UID:"194364f6-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-9lz2v
W1207 05:36:51.781] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 05:36:51.781] I1207 05:36:51.680473   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161003-8804", Name:"nginx0-deployment", UID:"19464d63-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1098", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W1207 05:36:51.781] I1207 05:36:51.682652   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161003-8804", Name:"nginx0-deployment-b6bb4ccbb", UID:"1946af65-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1099", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-6zgjf
W1207 05:36:51.781] I1207 05:36:51.684473   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161003-8804", Name:"nginx0-deployment-b6bb4ccbb", UID:"1946af65-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1099", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-w9jz9
I1207 05:36:51.882] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1207 05:36:51.896] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1207 05:36:52.108] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1207 05:36:52.110] Successful
I1207 05:36:52.111] message:deployment.extensions/nginx1-deployment skipped rollback (current template already matches revision 1)
I1207 05:36:52.111] deployment.extensions/nginx0-deployment skipped rollback (current template already matches revision 1)
I1207 05:36:52.111] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 05:36:52.111] has:Object 'Kind' is missing
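"skipped rollback" means rollout undo found the current pod template already identical to the target revision, so there was nothing to roll back; the presumed recursive invocation:

    kubectl rollout undo -f hack/testdata/recursive/deployment --recursive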
I1207 05:36:52.191] deployment.extensions/nginx1-deployment paused
I1207 05:36:52.194] deployment.extensions/nginx0-deployment paused
I1207 05:36:52.304] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1207 05:36:52.306] Successful
I1207 05:36:52.307] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I1207 05:36:52.586] 1         <none>
I1207 05:36:52.586] 
I1207 05:36:52.586] deployment.extensions/nginx0-deployment 
I1207 05:36:52.586] REVISION  CHANGE-CAUSE
I1207 05:36:52.586] 1         <none>
I1207 05:36:52.586] 
I1207 05:36:52.587] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 05:36:52.587] has:nginx0-deployment
I1207 05:36:52.588] Successful
I1207 05:36:52.588] message:deployment.extensions/nginx1-deployment 
I1207 05:36:52.588] REVISION  CHANGE-CAUSE
I1207 05:36:52.588] 1         <none>
I1207 05:36:52.588] 
I1207 05:36:52.588] deployment.extensions/nginx0-deployment 
I1207 05:36:52.588] REVISION  CHANGE-CAUSE
I1207 05:36:52.589] 1         <none>
I1207 05:36:52.589] 
I1207 05:36:52.589] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 05:36:52.589] has:nginx1-deployment
I1207 05:36:52.590] Successful
I1207 05:36:52.590] message:deployment.extensions/nginx1-deployment 
I1207 05:36:52.590] REVISION  CHANGE-CAUSE
I1207 05:36:52.590] 1         <none>
I1207 05:36:52.591] 
I1207 05:36:52.591] deployment.extensions/nginx0-deployment 
I1207 05:36:52.591] REVISION  CHANGE-CAUSE
I1207 05:36:52.591] 1         <none>
I1207 05:36:52.591] 
I1207 05:36:52.591] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 05:36:52.591] has:Object 'Kind' is missing
W1207 05:36:52.701] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 05:36:52.716] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 05:36:52.817] deployment.extensions "nginx1-deployment" force deleted
I1207 05:36:52.817] deployment.extensions "nginx0-deployment" force deleted
I1207 05:36:53.820] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:36:53.959] replicationcontroller/busybox0 created
I1207 05:36:53.963] replicationcontroller/busybox1 created
I1207 05:36:54.055] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
... skipping 6 lines ...
I1207 05:36:54.140] message:no rollbacker has been implemented for "ReplicationController"
I1207 05:36:54.141] no rollbacker has been implemented for "ReplicationController"
I1207 05:36:54.141] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:54.141] has:Object 'Kind' is missing
I1207 05:36:54.220] Successful
I1207 05:36:54.221] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:54.221] error: replicationcontrollers "busybox0" pausing is not supported
I1207 05:36:54.221] error: replicationcontrollers "busybox1" pausing is not supported
I1207 05:36:54.221] has:Object 'Kind' is missing
I1207 05:36:54.222] Successful
I1207 05:36:54.223] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:54.223] error: replicationcontrollers "busybox0" pausing is not supported
I1207 05:36:54.223] error: replicationcontrollers "busybox1" pausing is not supported
I1207 05:36:54.223] has:replicationcontrollers "busybox0" pausing is not supported
I1207 05:36:54.225] Successful
I1207 05:36:54.225] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:54.226] error: replicationcontrollers "busybox0" pausing is not supported
I1207 05:36:54.226] error: replicationcontrollers "busybox1" pausing is not supported
I1207 05:36:54.226] has:replicationcontrollers "busybox1" pausing is not supported
I1207 05:36:54.308] Successful
I1207 05:36:54.308] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:54.308] error: replicationcontrollers "busybox0" resuming is not supported
I1207 05:36:54.308] error: replicationcontrollers "busybox1" resuming is not supported
I1207 05:36:54.308] has:Object 'Kind' is missing
I1207 05:36:54.310] Successful
I1207 05:36:54.310] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:54.310] error: replicationcontrollers "busybox0" resuming is not supported
I1207 05:36:54.310] error: replicationcontrollers "busybox1" resuming is not supported
I1207 05:36:54.310] has:replicationcontrollers "busybox0" resuming is not supported
I1207 05:36:54.311] Successful
I1207 05:36:54.312] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:54.312] error: replicationcontrollers "busybox0" resuming is not supported
I1207 05:36:54.312] error: replicationcontrollers "busybox1" resuming is not supported
I1207 05:36:54.312] has:replicationcontrollers "busybox0" resuming is not supported
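rollout undo/pause/resume only apply to kinds that implement rollback and pause semantics (Deployments here); ReplicationControllers have no rollbacker and cannot be paused or resumed, which produces the errors above. For example (command form assumed):

    kubectl rollout pause rc/busybox0   # error: replicationcontrollers "busybox0" pausing is not supported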
I1207 05:36:54.382] replicationcontroller "busybox0" force deleted
I1207 05:36:54.388] replicationcontroller "busybox1" force deleted
W1207 05:36:54.488] I1207 05:36:53.961551   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161003-8804", Name:"busybox0", UID:"1aa23095-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1131", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cmwtm
W1207 05:36:54.489] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 05:36:54.489] I1207 05:36:53.964740   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161003-8804", Name:"busybox1", UID:"1aa2e180-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1133", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-xt2jc
W1207 05:36:54.489] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 05:36:54.489] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 05:36:55.480] +++ exit code: 0
I1207 05:36:55.700] Recording: run_namespace_tests
I1207 05:36:55.700] Running command: run_namespace_tests
I1207 05:36:55.799] 
I1207 05:36:55.801] +++ Running case: test-cmd.run_namespace_tests 
I1207 05:36:55.803] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:36:55.806] +++ command: run_namespace_tests
I1207 05:36:55.814] +++ [1207 05:36:55] Testing kubectl(v1:namespaces)
I1207 05:36:55.913] namespace/my-namespace created
I1207 05:36:55.996] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1207 05:36:56.068] namespace "my-namespace" deleted
W1207 05:37:01.067] E1207 05:37:01.066693   55520 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I1207 05:37:01.172] namespace/my-namespace condition met
I1207 05:37:01.252] Successful
I1207 05:37:01.252] message:Error from server (NotFound): namespaces "my-namespace" not found
I1207 05:37:01.253] has: not found
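Namespace deletion is asynchronous: the namespace sits in Terminating until its contents are finalized, so the test waits for it to vanish before asserting NotFound. The "condition met" line matches kubectl wait output, e.g.:

    kubectl wait --for=delete ns/my-namespace --timeout=60s
    kubectl get ns/my-namespace   # Error from server (NotFound)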
I1207 05:37:01.357] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I1207 05:37:01.427] namespace/other created
I1207 05:37:01.513] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I1207 05:37:01.624] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:37:01.794] pod/valid-pod created
I1207 05:37:01.881] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 05:37:01.965] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 05:37:02.037] Successful
I1207 05:37:02.038] message:error: a resource cannot be retrieved by name across all namespaces
I1207 05:37:02.038] has:a resource cannot be retrieved by name across all namespaces
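Retrieving a resource by name together with --all-namespaces is rejected because a name is only unique within a single namespace; presumably the failing call was of the form:

    kubectl get pods valid-pod --all-namespaces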
I1207 05:37:02.118] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 05:37:02.190] pod "valid-pod" force deleted
I1207 05:37:02.274] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:37:02.346] namespace "other" deleted
W1207 05:37:02.446] I1207 05:37:01.370352   55520 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
... skipping 119 lines ...
I1207 05:37:23.095] +++ command: run_client_config_tests
I1207 05:37:23.107] +++ [1207 05:37:23] Creating namespace namespace-1544161043-12099
I1207 05:37:23.172] namespace/namespace-1544161043-12099 created
I1207 05:37:23.239] Context "test" modified.
I1207 05:37:23.246] +++ [1207 05:37:23] Testing client config
I1207 05:37:23.309] Successful
I1207 05:37:23.309] message:error: stat missing: no such file or directory
I1207 05:37:23.309] has:missing: no such file or directory
I1207 05:37:23.371] Successful
I1207 05:37:23.372] message:error: stat missing: no such file or directory
I1207 05:37:23.372] has:missing: no such file or directory
I1207 05:37:23.438] Successful
I1207 05:37:23.438] message:error: stat missing: no such file or directory
I1207 05:37:23.438] has:missing: no such file or directory
I1207 05:37:23.505] Successful
I1207 05:37:23.505] message:Error in configuration: context was not found for specified context: missing-context
I1207 05:37:23.505] has:context was not found for specified context: missing-context
I1207 05:37:23.571] Successful
I1207 05:37:23.572] message:error: no server found for cluster "missing-cluster"
I1207 05:37:23.572] has:no server found for cluster "missing-cluster"
I1207 05:37:23.638] Successful
I1207 05:37:23.638] message:error: auth info "missing-user" does not exist
I1207 05:37:23.638] has:auth info "missing-user" does not exist
I1207 05:37:23.764] Successful
I1207 05:37:23.764] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1207 05:37:23.764] has:Error loading config file
I1207 05:37:23.826] Successful
I1207 05:37:23.826] message:error: stat missing-config: no such file or directory
I1207 05:37:23.826] has:no such file or directory
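Each client-config case points kubectl at a kubeconfig element that does not exist; plausible invocations for the errors above (values taken from the messages):

    kubectl get pods --kubeconfig=missing         # stat missing: no such file or directory
    kubectl get pods --context=missing-context    # context was not found
    kubectl get pods --cluster=missing-cluster    # no server found for cluster
    kubectl get pods --user=missing-user          # auth info does not exist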
I1207 05:37:23.838] +++ exit code: 0
I1207 05:37:23.943] Recording: run_service_accounts_tests
I1207 05:37:23.943] Running command: run_service_accounts_tests
I1207 05:37:23.962] 
I1207 05:37:23.964] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 77 lines ...
I1207 05:37:31.394]                 job-name=test-job
I1207 05:37:31.395]                 run=pi
I1207 05:37:31.395] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1207 05:37:31.395] Parallelism:    1
I1207 05:37:31.395] Completions:    1
I1207 05:37:31.395] Start Time:     Fri, 07 Dec 2018 05:37:31 +0000
I1207 05:37:31.395] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1207 05:37:31.395] Pod Template:
I1207 05:37:31.395]   Labels:  controller-uid=30ccf61b-f9e2-11e8-9dca-0242ac110002
I1207 05:37:31.395]            job-name=test-job
I1207 05:37:31.395]            run=pi
I1207 05:37:31.395]   Containers:
I1207 05:37:31.395]    pi:
... skipping 328 lines ...
I1207 05:37:41.156]   selector:
I1207 05:37:41.156]     role: padawan
I1207 05:37:41.156]   sessionAffinity: None
I1207 05:37:41.156]   type: ClusterIP
I1207 05:37:41.156] status:
I1207 05:37:41.156]   loadBalancer: {}
W1207 05:37:41.257] error: you must specify resources by --filename when --local is set.
W1207 05:37:41.257] Example resource specifications include:
W1207 05:37:41.257]    '-f rsrc.yaml'
W1207 05:37:41.257]    '--filename=rsrc.json'
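--local operates purely on an object supplied on the command line and never contacts the server, so omitting --filename is an error. A working form of the preceding set-selector dry run might be (file name hypothetical):

    kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml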
I1207 05:37:41.358] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1207 05:37:41.455] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1207 05:37:41.531] service "redis-master" deleted
... skipping 90 lines ...
I1207 05:37:47.483]   Volumes:	<none>
I1207 05:37:47.483]  (dry run)
I1207 05:37:47.588] apps.sh:79: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1207 05:37:47.682] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:37:47.775] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 05:37:47.887] daemonset.extensions/bind rolled back
W1207 05:37:47.991] E1207 05:37:47.893249   55520 daemon_controller.go:303] namespace-1544161065-16723/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1544161065-16723", SelfLink:"/apis/apps/v1/namespaces/namespace-1544161065-16723/daemonsets/bind", UID:"39df53ad-f9e2-11e8-9dca-0242ac110002", ResourceVersion:"1348", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63679757866, loc:(*time.Location)(0x66fc920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1544161065-16723\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc003dcc120), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0043065b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003e5bce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc003dcc160), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0025dfef0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc004306630)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
I1207 05:37:48.091] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 05:37:48.098] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 05:37:48.221] Successful
I1207 05:37:48.222] message:error: unable to find specified revision 1000000 in history
I1207 05:37:48.222] has:unable to find specified revision
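rollout undo accepts --to-revision, and asking for a revision absent from the controller's history fails as above; e.g. (revision number taken from the error):

    kubectl rollout undo daemonset/bind --to-revision=1000000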
I1207 05:37:48.314] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 05:37:48.417] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 05:37:48.534] daemonset.extensions/bind rolled back
W1207 05:37:48.638] E1207 05:37:48.540149   55520 daemon_controller.go:303] namespace-1544161065-16723/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1544161065-16723", SelfLink:"/apis/apps/v1/namespaces/namespace-1544161065-16723/daemonsets/bind", UID:"39df53ad-f9e2-11e8-9dca-0242ac110002", ResourceVersion:"1352", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63679757866, loc:(*time.Location)(0x66fc920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true", "deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1544161065-16723\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc003ca1560), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00423daa8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003c0f5c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc003ca1620), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00294c388)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00423db20)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
I1207 05:37:48.738] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1207 05:37:48.749] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:37:48.845] apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 05:37:48.928] daemonset.extensions "bind" deleted
I1207 05:37:48.945] +++ exit code: 0
I1207 05:37:48.972] Recording: run_rc_tests
... skipping 24 lines ...
I1207 05:37:50.132] Namespace:    namespace-1544161069-15816
I1207 05:37:50.132] Selector:     app=guestbook,tier=frontend
I1207 05:37:50.132] Labels:       app=guestbook
I1207 05:37:50.132]               tier=frontend
I1207 05:37:50.133] Annotations:  <none>
I1207 05:37:50.133] Replicas:     3 current / 3 desired
I1207 05:37:50.133] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:37:50.133] Pod Template:
I1207 05:37:50.133]   Labels:  app=guestbook
I1207 05:37:50.133]            tier=frontend
I1207 05:37:50.133]   Containers:
I1207 05:37:50.133]    php-redis:
I1207 05:37:50.134]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 05:37:50.245] Namespace:    namespace-1544161069-15816
I1207 05:37:50.245] Selector:     app=guestbook,tier=frontend
I1207 05:37:50.245] Labels:       app=guestbook
I1207 05:37:50.245]               tier=frontend
I1207 05:37:50.246] Annotations:  <none>
I1207 05:37:50.246] Replicas:     3 current / 3 desired
I1207 05:37:50.246] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:37:50.246] Pod Template:
I1207 05:37:50.246]   Labels:  app=guestbook
I1207 05:37:50.246]            tier=frontend
I1207 05:37:50.246]   Containers:
I1207 05:37:50.246]    php-redis:
I1207 05:37:50.246]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1207 05:37:50.364] Namespace:    namespace-1544161069-15816
I1207 05:37:50.365] Selector:     app=guestbook,tier=frontend
I1207 05:37:50.365] Labels:       app=guestbook
I1207 05:37:50.365]               tier=frontend
I1207 05:37:50.365] Annotations:  <none>
I1207 05:37:50.365] Replicas:     3 current / 3 desired
I1207 05:37:50.365] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:37:50.365] Pod Template:
I1207 05:37:50.365]   Labels:  app=guestbook
I1207 05:37:50.365]            tier=frontend
I1207 05:37:50.365]   Containers:
I1207 05:37:50.366]    php-redis:
I1207 05:37:50.366]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1207 05:37:50.480] Namespace:    namespace-1544161069-15816
I1207 05:37:50.480] Selector:     app=guestbook,tier=frontend
I1207 05:37:50.480] Labels:       app=guestbook
I1207 05:37:50.480]               tier=frontend
I1207 05:37:50.481] Annotations:  <none>
I1207 05:37:50.481] Replicas:     3 current / 3 desired
I1207 05:37:50.481] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:37:50.481] Pod Template:
I1207 05:37:50.481]   Labels:  app=guestbook
I1207 05:37:50.481]            tier=frontend
I1207 05:37:50.481]   Containers:
I1207 05:37:50.481]    php-redis:
I1207 05:37:50.481]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1207 05:37:50.618] Namespace:    namespace-1544161069-15816
I1207 05:37:50.618] Selector:     app=guestbook,tier=frontend
I1207 05:37:50.619] Labels:       app=guestbook
I1207 05:37:50.619]               tier=frontend
I1207 05:37:50.619] Annotations:  <none>
I1207 05:37:50.619] Replicas:     3 current / 3 desired
I1207 05:37:50.619] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:37:50.619] Pod Template:
I1207 05:37:50.619]   Labels:  app=guestbook
I1207 05:37:50.619]            tier=frontend
I1207 05:37:50.620]   Containers:
I1207 05:37:50.620]    php-redis:
I1207 05:37:50.620]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 05:37:50.723] Namespace:    namespace-1544161069-15816
I1207 05:37:50.723] Selector:     app=guestbook,tier=frontend
I1207 05:37:50.723] Labels:       app=guestbook
I1207 05:37:50.723]               tier=frontend
I1207 05:37:50.723] Annotations:  <none>
I1207 05:37:50.724] Replicas:     3 current / 3 desired
I1207 05:37:50.724] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:37:50.724] Pod Template:
I1207 05:37:50.724]   Labels:  app=guestbook
I1207 05:37:50.724]            tier=frontend
I1207 05:37:50.724]   Containers:
I1207 05:37:50.724]    php-redis:
I1207 05:37:50.724]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 05:37:50.827] Namespace:    namespace-1544161069-15816
I1207 05:37:50.827] Selector:     app=guestbook,tier=frontend
I1207 05:37:50.827] Labels:       app=guestbook
I1207 05:37:50.827]               tier=frontend
I1207 05:37:50.828] Annotations:  <none>
I1207 05:37:50.828] Replicas:     3 current / 3 desired
I1207 05:37:50.828] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:37:50.828] Pod Template:
I1207 05:37:50.828]   Labels:  app=guestbook
I1207 05:37:50.828]            tier=frontend
I1207 05:37:50.828]   Containers:
I1207 05:37:50.828]    php-redis:
I1207 05:37:50.828]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1207 05:37:50.936] Namespace:    namespace-1544161069-15816
I1207 05:37:50.936] Selector:     app=guestbook,tier=frontend
I1207 05:37:50.936] Labels:       app=guestbook
I1207 05:37:50.936]               tier=frontend
I1207 05:37:50.936] Annotations:  <none>
I1207 05:37:50.936] Replicas:     3 current / 3 desired
I1207 05:37:50.936] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:37:50.936] Pod Template:
I1207 05:37:50.936]   Labels:  app=guestbook
I1207 05:37:50.937]            tier=frontend
I1207 05:37:50.937]   Containers:
I1207 05:37:50.937]    php-redis:
I1207 05:37:50.937]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I1207 05:37:51.768] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I1207 05:37:51.858] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I1207 05:37:51.948] replicationcontroller/frontend scaled
I1207 05:37:52.041] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I1207 05:37:52.120] replicationcontroller "frontend" deleted
W1207 05:37:52.221] I1207 05:37:51.119872   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161069-15816", Name:"frontend", UID:"3bf45091-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1387", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-gmsc2
W1207 05:37:52.221] error: Expected replicas to be 3, was 2
W1207 05:37:52.222] I1207 05:37:51.680979   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161069-15816", Name:"frontend", UID:"3bf45091-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1393", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-brxx5
W1207 05:37:52.222] I1207 05:37:51.952410   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161069-15816", Name:"frontend", UID:"3bf45091-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-brxx5
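"Expected replicas to be 3, was 2" is a failed scale precondition: --current-replicas makes the resize conditional on the observed count so concurrent scalers do not race. Sketch of the presumed call:

    kubectl scale rc frontend --current-replicas=3 --replicas=2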
W1207 05:37:52.295] I1207 05:37:52.295016   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161069-15816", Name:"redis-master", UID:"3d67448b-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-g5n85
I1207 05:37:52.396] replicationcontroller/redis-master created
I1207 05:37:52.470] replicationcontroller/redis-slave created
W1207 05:37:52.571] I1207 05:37:52.472859   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161069-15816", Name:"redis-slave", UID:"3d825d0f-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-bkgn7
... skipping 36 lines ...
I1207 05:37:54.306] service "expose-test-deployment" deleted
I1207 05:37:54.419] Successful
I1207 05:37:54.419] message:service/expose-test-deployment exposed
I1207 05:37:54.419] has:service/expose-test-deployment exposed
I1207 05:37:54.506] service "expose-test-deployment" deleted
I1207 05:37:54.604] Successful
I1207 05:37:54.604] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1207 05:37:54.604] See 'kubectl expose -h' for help and examples
I1207 05:37:54.604] has:invalid deployment: no selectors
I1207 05:37:54.693] Successful
I1207 05:37:54.694] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1207 05:37:54.694] See 'kubectl expose -h' for help and examples
I1207 05:37:54.694] has:invalid deployment: no selectors
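expose needs a selector, either from --selector or introspected from the target object, so a deployment without a recoverable selector cannot be exposed. Hypothetical reproduction (deployment name invented):

    kubectl expose deployment no-selector-deploy --port=80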
I1207 05:37:54.891] deployment.extensions/nginx-deployment created
I1207 05:37:54.990] core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I1207 05:37:55.115] service/nginx-deployment exposed
W1207 05:37:55.216] I1207 05:37:54.893815   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment", UID:"3ef3b5df-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1515", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-659fc6fb to 3
... skipping 23 lines ...
I1207 05:37:56.963] service "frontend" deleted
I1207 05:37:56.969] service "frontend-2" deleted
I1207 05:37:56.976] service "frontend-3" deleted
I1207 05:37:56.983] service "frontend-4" deleted
I1207 05:37:56.990] service "frontend-5" deleted
I1207 05:37:57.085] Successful
I1207 05:37:57.085] message:error: cannot expose a Node
I1207 05:37:57.085] has:cannot expose
I1207 05:37:57.195] Successful
I1207 05:37:57.195] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1207 05:37:57.196] has:metadata.name: Invalid value
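Service names must be valid DNS-1035 labels, hence the 63-character cap that the name above exceeds; e.g. (target resource assumed):

    kubectl expose rc frontend --port=80 --name=invalid-large-service-name-that-has-more-than-sixty-three-characters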
I1207 05:37:57.306] Successful
I1207 05:37:57.306] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I1207 05:37:59.321] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1207 05:37:59.398] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1207 05:37:59.464] horizontalpodautoscaler.autoscaling "frontend" deleted
W1207 05:37:59.565] I1207 05:37:58.955780   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161069-15816", Name:"frontend", UID:"415f9ca3-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4g26n
W1207 05:37:59.565] I1207 05:37:58.958337   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161069-15816", Name:"frontend", UID:"415f9ca3-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7vngq
W1207 05:37:59.566] I1207 05:37:58.958373   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161069-15816", Name:"frontend", UID:"415f9ca3-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"1635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-597qj
W1207 05:37:59.566] Error: required flag(s) "max" not set
W1207 05:37:59.566] 
W1207 05:37:59.566] 
W1207 05:37:59.566] Examples:
W1207 05:37:59.566]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1207 05:37:59.566]   kubectl autoscale deployment foo --min=2 --max=10
W1207 05:37:59.566]   
... skipping 54 lines ...
I1207 05:37:59.739]           limits:
I1207 05:37:59.739]             cpu: 300m
I1207 05:37:59.739]           requests:
I1207 05:37:59.739]             cpu: 300m
I1207 05:37:59.739]       terminationGracePeriodSeconds: 0
I1207 05:37:59.739] status: {}
W1207 05:37:59.840] Error from server (NotFound): deployments.extensions "nginx-deployment-resources" not found
I1207 05:37:59.940] deployment.extensions/nginx-deployment-resources created
I1207 05:38:00.028] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1207 05:38:00.103] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:38:00.178] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1207 05:38:00.252] deployment.extensions/nginx-deployment-resources resource requirements updated
I1207 05:38:00.345] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 82 lines ...
W1207 05:38:01.219] I1207 05:38:00.255306   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources", UID:"41f61c5a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W1207 05:38:01.219] I1207 05:38:00.259776   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-6c5996c457", UID:"4226550e-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-46746
W1207 05:38:01.220] I1207 05:38:00.261053   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources", UID:"41f61c5a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W1207 05:38:01.220] I1207 05:38:00.267254   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-69c96fd869", UID:"41f69cef-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1674", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-whgn6
W1207 05:38:01.220] I1207 05:38:00.268085   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources", UID:"41f61c5a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 2
W1207 05:38:01.220] I1207 05:38:00.273482   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-6c5996c457", UID:"4226550e-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1680", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-khrgt
W1207 05:38:01.220] error: unable to find container named redis
W1207 05:38:01.221] I1207 05:38:00.571267   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources", UID:"41f61c5a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 0
W1207 05:38:01.221] I1207 05:38:00.575248   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-69c96fd869", UID:"41f69cef-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-bp7mb
W1207 05:38:01.221] I1207 05:38:00.575462   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-69c96fd869", UID:"41f69cef-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-5htzc
W1207 05:38:01.221] I1207 05:38:00.577185   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources", UID:"41f61c5a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1698", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 2
W1207 05:38:01.222] I1207 05:38:00.580314   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-5f4579485f", UID:"4255cf1b-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-84nhp
W1207 05:38:01.222] I1207 05:38:00.582339   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-5f4579485f", UID:"4255cf1b-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-tz7ms
W1207 05:38:01.222] I1207 05:38:00.797604   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources", UID:"41f61c5a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1719", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-5f4579485f to 0
W1207 05:38:01.223] I1207 05:38:00.802503   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources", UID:"41f61c5a-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 2
W1207 05:38:01.223] E1207 05:38:00.845560   55520 replica_set.go:450] Sync "namespace-1544161069-15816/nginx-deployment-resources-5f4579485f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-resources-5f4579485f": the object has been modified; please apply your changes to the latest version and try again
W1207 05:38:01.223] I1207 05:38:00.948093   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-5f4579485f", UID:"4255cf1b-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1722", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5f4579485f-tz7ms
W1207 05:38:01.223] I1207 05:38:00.997733   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-5f4579485f", UID:"4255cf1b-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1722", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5f4579485f-84nhp
W1207 05:38:01.224] error: you must specify resources by --filename when --local is set.
W1207 05:38:01.224] Example resource specifications include:
W1207 05:38:01.224]    '-f rsrc.yaml'
W1207 05:38:01.224]    '--filename=rsrc.json'
I1207 05:38:01.324] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1207 05:38:01.324] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1207 05:38:01.391] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
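The "resource requirements updated" lines and the --local failure above correspond to kubectl set resources; plausible invocations (container and file names illustrative) are:
  kubectl set resources deployment nginx-deployment-resources -c=nginx --limits=cpu=200m --requests=cpu=100m
  kubectl set resources --local -f deploy.yaml --limits=cpu=300m -o yaml   # --local requires -f/--filename, per the error above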
... skipping 31 lines ...
I1207 05:38:02.342] has:extensions/v1beta1
I1207 05:38:02.409] Successful
I1207 05:38:02.409] message:apps/v1
I1207 05:38:02.409] has:apps/v1
W1207 05:38:02.509] I1207 05:38:01.346288   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-ff8d89cb6", UID:"42786bd6-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1735", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-q5t9n
W1207 05:38:02.510] I1207 05:38:01.446704   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161069-15816", Name:"nginx-deployment-resources-ff8d89cb6", UID:"42786bd6-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1735", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-mzxs7
W1207 05:38:02.510] E1207 05:38:01.594446   55520 replica_set.go:450] Sync "namespace-1544161069-15816/nginx-deployment-resources-ff8d89cb6" failed with replicasets.apps "nginx-deployment-resources-ff8d89cb6" not found
W1207 05:38:02.510] I1207 05:38:01.714623   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"test-nginx-extensions", UID:"43049682-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1758", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5b89c6c69f to 1
W1207 05:38:02.511] I1207 05:38:01.718337   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"test-nginx-extensions-5b89c6c69f", UID:"43050706-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1759", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5b89c6c69f-6szqv
W1207 05:38:02.511] I1207 05:38:02.136412   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"test-nginx-apps", UID:"4344ee68-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1772", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-55c9b846cc to 1
W1207 05:38:02.511] I1207 05:38:02.138611   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"test-nginx-apps-55c9b846cc", UID:"434565c5-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1773", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-apps-55c9b846cc-rj5k7
I1207 05:38:02.612] Successful describe rs:
I1207 05:38:02.612] Name:           test-nginx-apps-55c9b846cc
... skipping 3 lines ...
I1207 05:38:02.613]                 pod-template-hash=55c9b846cc
I1207 05:38:02.613] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1207 05:38:02.613]                 deployment.kubernetes.io/max-replicas: 2
I1207 05:38:02.613]                 deployment.kubernetes.io/revision: 1
I1207 05:38:02.613] Controlled By:  Deployment/test-nginx-apps
I1207 05:38:02.613] Replicas:       1 current / 1 desired
I1207 05:38:02.613] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:02.613] Pod Template:
I1207 05:38:02.613]   Labels:  app=test-nginx-apps
I1207 05:38:02.613]            pod-template-hash=55c9b846cc
I1207 05:38:02.614]   Containers:
I1207 05:38:02.614]    nginx:
I1207 05:38:02.614]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 95 lines ...
W1207 05:38:06.097] I1207 05:38:05.677786   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-6f6bb85d9c", UID:"451bcb09-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1901", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-6f6bb85d9c-j4swl
W1207 05:38:06.097] I1207 05:38:05.680754   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx", UID:"451b5205-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1898", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 2
W1207 05:38:06.098] I1207 05:38:05.684626   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-9486b7cb7", UID:"455f3212-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1907", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-77tz5
I1207 05:38:07.071] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:38:07.228] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 05:38:07.314] deployment.extensions/nginx rolled back
W1207 05:38:07.415] error: unable to find specified revision 1000000 in history
I1207 05:38:08.395] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1207 05:38:08.468] deployment.extensions/nginx paused
W1207 05:38:08.569] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I1207 05:38:08.669] deployment.extensions/nginx resumed
I1207 05:38:08.741] deployment.extensions/nginx rolled back
I1207 05:38:08.919]     deployment.kubernetes.io/revision-history: 1,3
W1207 05:38:09.096] error: desired revision (3) is different from the running revision (5)
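The rollback errors above trace to a kubectl rollout sequence along these lines (a sketch; revision numbers taken from the messages):
  kubectl rollout undo deployment/nginx --to-revision=1000000   # unable to find specified revision
  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx                         # rejected: cannot rollback a paused deployment
  kubectl rollout resume deployment/nginx
  kubectl rollout status deployment/nginx --revision=3          # fails when another revision (here 5) is running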
I1207 05:38:09.234] deployment.extensions/nginx2 created
I1207 05:38:09.318] deployment.extensions "nginx2" deleted
I1207 05:38:09.397] deployment.extensions "nginx" deleted
W1207 05:38:09.498] I1207 05:38:09.238181   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx2", UID:"478059e8-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1938", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-6b58f7cc65 to 3
W1207 05:38:09.498] I1207 05:38:09.241767   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx2-6b58f7cc65", UID:"4780edb1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-6b58f7cc65-94z7r
W1207 05:38:09.498] I1207 05:38:09.244377   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx2-6b58f7cc65", UID:"4780edb1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-6b58f7cc65-q7f5s
... skipping 29 lines ...
W1207 05:38:11.946] I1207 05:38:10.001492   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"47bfcb8e-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1985", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W1207 05:38:11.946] I1207 05:38:10.004578   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-85db47bbdb", UID:"47f564d1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1986", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-smfq8
W1207 05:38:11.947] I1207 05:38:10.008049   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"47bfcb8e-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1985", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1207 05:38:11.947] I1207 05:38:10.013932   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-646d4f779d", UID:"47c04bdb-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-9sltj
W1207 05:38:11.947] I1207 05:38:10.016388   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"47bfcb8e-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 2
W1207 05:38:11.947] I1207 05:38:10.020153   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-85db47bbdb", UID:"47f564d1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1998", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-cl97v
W1207 05:38:11.947] error: unable to find container named "redis"
W1207 05:38:11.948] I1207 05:38:11.138795   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"47bfcb8e-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2019", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1207 05:38:11.948] I1207 05:38:11.143949   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"47bfcb8e-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2022", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 2
W1207 05:38:11.948] I1207 05:38:11.145449   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-646d4f779d", UID:"47c04bdb-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-5wh25
W1207 05:38:11.948] I1207 05:38:11.147524   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-646d4f779d", UID:"47c04bdb-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-pbxvr
W1207 05:38:11.949] I1207 05:38:11.147544   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-dc756cc6", UID:"48a21fa5-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-xl7sn
W1207 05:38:11.949] I1207 05:38:11.150061   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-dc756cc6", UID:"48a21fa5-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-hvg74
... skipping 9 lines ...
I1207 05:38:12.523] deployment.extensions/nginx-deployment env updated
W1207 05:38:12.624] I1207 05:38:12.525603   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"490e84e1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2074", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 1
W1207 05:38:12.624] I1207 05:38:12.528453   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-5b795689cd", UID:"49769ec4-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-82kjm
W1207 05:38:12.624] I1207 05:38:12.531139   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"490e84e1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2074", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1207 05:38:12.625] I1207 05:38:12.535515   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"490e84e1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2078", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 2
W1207 05:38:12.625] I1207 05:38:12.536515   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-646d4f779d", UID:"490f1147-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2081", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-sd7c7
W1207 05:38:12.625] E1207 05:38:12.536662   55520 replica_set.go:450] Sync "namespace-1544161081-20268/nginx-deployment-5b795689cd" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5b795689cd": the object has been modified; please apply your changes to the latest version and try again
W1207 05:38:12.625] I1207 05:38:12.538925   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-5b795689cd", UID:"49769ec4-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2084", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-gpzgp
I1207 05:38:12.726] apps.sh:378: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
I1207 05:38:12.726] apps.sh:380: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
I1207 05:38:12.811] deployment.extensions/nginx-deployment env updated
I1207 05:38:12.911] apps.sh:384: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 2
I1207 05:38:12.997] deployment.extensions/nginx-deployment env updated
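The "env updated" lines above come from kubectl set env; equivalent standalone commands (key names from the assertions, values illustrative) would be:
  kubectl set env deployment/nginx-deployment KEY_2=value2   # add or update a variable
  kubectl set env deployment/nginx-deployment KEY_1=value1
  kubectl set env deployment/nginx-deployment KEY_1-         # a trailing '-' removes the variable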
... skipping 44 lines ...
W1207 05:38:15.420] I1207 05:38:13.099280   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"490e84e1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2129", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5b795689cd to 0
W1207 05:38:15.420] I1207 05:38:13.106287   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"490e84e1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2131", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-65b869c68c to 2
W1207 05:38:15.420] I1207 05:38:13.327408   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"490e84e1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2138", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-794dcdf6bb to 0
W1207 05:38:15.421] I1207 05:38:13.344539   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-5b795689cd", UID:"49769ec4-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2132", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-82kjm
W1207 05:38:15.421] I1207 05:38:13.394968   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment-5b795689cd", UID:"49769ec4-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2132", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-gpzgp
W1207 05:38:15.421] I1207 05:38:13.476051   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161081-20268", Name:"nginx-deployment", UID:"490e84e1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2142", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7b8f7659b7 to 2
W1207 05:38:15.422] E1207 05:38:13.540339   55520 replica_set.go:450] Sync "namespace-1544161081-20268/nginx-deployment-5766b7c95b" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5766b7c95b": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1544161081-20268/nginx-deployment-5766b7c95b, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 49a2960f-f9e2-11e8-9dca-0242ac110002, UID in object meta: 
W1207 05:38:15.422] E1207 05:38:13.590800   55520 replica_set.go:450] Sync "namespace-1544161081-20268/nginx-deployment-65b869c68c" failed with replicasets.apps "nginx-deployment-65b869c68c" not found
W1207 05:38:15.423] E1207 05:38:13.741290   55520 replica_set.go:450] Sync "namespace-1544161081-20268/nginx-deployment-794dcdf6bb" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-794dcdf6bb": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1544161081-20268/nginx-deployment-794dcdf6bb, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 49bf3710-f9e2-11e8-9dca-0242ac110002, UID in object meta: 
W1207 05:38:15.423] E1207 05:38:13.790043   55520 replica_set.go:450] Sync "namespace-1544161081-20268/nginx-deployment-669d4f8fc9" failed with replicasets.apps "nginx-deployment-669d4f8fc9" not found
W1207 05:38:15.423] E1207 05:38:13.890762   55520 replica_set.go:450] Sync "namespace-1544161081-20268/nginx-deployment-5b795689cd" failed with replicasets.apps "nginx-deployment-5b795689cd" not found
W1207 05:38:15.423] I1207 05:38:14.085244   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4a63d414-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2170", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2lzcn
W1207 05:38:15.424] I1207 05:38:14.087904   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4a63d414-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2170", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6wnsd
W1207 05:38:15.424] I1207 05:38:14.090619   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4a63d414-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2170", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-54sft
W1207 05:38:15.424] I1207 05:38:14.102932   55520 horizontal.go:309] Horizontal Pod Autoscaler frontend has been deleted in namespace-1544161069-15816
W1207 05:38:15.424] E1207 05:38:14.290787   55520 replica_set.go:450] Sync "namespace-1544161093-27707/frontend" failed with replicasets.apps "frontend" not found
W1207 05:38:15.425] I1207 05:38:14.475559   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend-no-cascade", UID:"4a9fa477-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-hn989
W1207 05:38:15.425] I1207 05:38:14.478203   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend-no-cascade", UID:"4a9fa477-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-rfpnd
W1207 05:38:15.425] I1207 05:38:14.491382   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend-no-cascade", UID:"4a9fa477-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-plh2s
W1207 05:38:15.426] E1207 05:38:14.739999   55520 replica_set.go:450] Sync "namespace-1544161093-27707/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W1207 05:38:15.426] I1207 05:38:15.317918   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4b202ddc-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2204", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-lnqh9
W1207 05:38:15.426] I1207 05:38:15.320496   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4b202ddc-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2204", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xqvm4
W1207 05:38:15.427] I1207 05:38:15.320724   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4b202ddc-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2204", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tdlkz
I1207 05:38:15.527] apps.sh:535: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1207 05:38:15.580] apps.sh:537: Successful describe rs frontend:
I1207 05:38:15.580] Name:         frontend
I1207 05:38:15.580] Namespace:    namespace-1544161093-27707
I1207 05:38:15.581] Selector:     app=guestbook,tier=frontend
I1207 05:38:15.581] Labels:       app=guestbook
I1207 05:38:15.581]               tier=frontend
I1207 05:38:15.581] Annotations:  <none>
I1207 05:38:15.581] Replicas:     3 current / 3 desired
I1207 05:38:15.581] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:15.581] Pod Template:
I1207 05:38:15.581]   Labels:  app=guestbook
I1207 05:38:15.581]            tier=frontend
I1207 05:38:15.581]   Containers:
I1207 05:38:15.581]    php-redis:
I1207 05:38:15.582]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 05:38:15.690] Namespace:    namespace-1544161093-27707
I1207 05:38:15.690] Selector:     app=guestbook,tier=frontend
I1207 05:38:15.690] Labels:       app=guestbook
I1207 05:38:15.690]               tier=frontend
I1207 05:38:15.690] Annotations:  <none>
I1207 05:38:15.690] Replicas:     3 current / 3 desired
I1207 05:38:15.691] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:15.691] Pod Template:
I1207 05:38:15.691]   Labels:  app=guestbook
I1207 05:38:15.691]            tier=frontend
I1207 05:38:15.691]   Containers:
I1207 05:38:15.691]    php-redis:
I1207 05:38:15.691]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1207 05:38:15.796] Namespace:    namespace-1544161093-27707
I1207 05:38:15.796] Selector:     app=guestbook,tier=frontend
I1207 05:38:15.796] Labels:       app=guestbook
I1207 05:38:15.797]               tier=frontend
I1207 05:38:15.797] Annotations:  <none>
I1207 05:38:15.797] Replicas:     3 current / 3 desired
I1207 05:38:15.797] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:15.797] Pod Template:
I1207 05:38:15.797]   Labels:  app=guestbook
I1207 05:38:15.797]            tier=frontend
I1207 05:38:15.797]   Containers:
I1207 05:38:15.797]    php-redis:
I1207 05:38:15.797]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I1207 05:38:15.899] Namespace:    namespace-1544161093-27707
I1207 05:38:15.899] Selector:     app=guestbook,tier=frontend
I1207 05:38:15.900] Labels:       app=guestbook
I1207 05:38:15.900]               tier=frontend
I1207 05:38:15.900] Annotations:  <none>
I1207 05:38:15.900] Replicas:     3 current / 3 desired
I1207 05:38:15.900] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:15.900] Pod Template:
I1207 05:38:15.900]   Labels:  app=guestbook
I1207 05:38:15.900]            tier=frontend
I1207 05:38:15.901]   Containers:
I1207 05:38:15.901]    php-redis:
I1207 05:38:15.901]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1207 05:38:16.027] Namespace:    namespace-1544161093-27707
I1207 05:38:16.027] Selector:     app=guestbook,tier=frontend
I1207 05:38:16.027] Labels:       app=guestbook
I1207 05:38:16.027]               tier=frontend
I1207 05:38:16.027] Annotations:  <none>
I1207 05:38:16.027] Replicas:     3 current / 3 desired
I1207 05:38:16.027] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:16.027] Pod Template:
I1207 05:38:16.027]   Labels:  app=guestbook
I1207 05:38:16.028]            tier=frontend
I1207 05:38:16.028]   Containers:
I1207 05:38:16.028]    php-redis:
I1207 05:38:16.028]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 05:38:16.127] Namespace:    namespace-1544161093-27707
I1207 05:38:16.128] Selector:     app=guestbook,tier=frontend
I1207 05:38:16.128] Labels:       app=guestbook
I1207 05:38:16.128]               tier=frontend
I1207 05:38:16.128] Annotations:  <none>
I1207 05:38:16.128] Replicas:     3 current / 3 desired
I1207 05:38:16.128] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:16.128] Pod Template:
I1207 05:38:16.128]   Labels:  app=guestbook
I1207 05:38:16.128]            tier=frontend
I1207 05:38:16.128]   Containers:
I1207 05:38:16.128]    php-redis:
I1207 05:38:16.128]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 05:38:16.225] Namespace:    namespace-1544161093-27707
I1207 05:38:16.225] Selector:     app=guestbook,tier=frontend
I1207 05:38:16.225] Labels:       app=guestbook
I1207 05:38:16.225]               tier=frontend
I1207 05:38:16.225] Annotations:  <none>
I1207 05:38:16.225] Replicas:     3 current / 3 desired
I1207 05:38:16.225] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:16.225] Pod Template:
I1207 05:38:16.225]   Labels:  app=guestbook
I1207 05:38:16.226]            tier=frontend
I1207 05:38:16.226]   Containers:
I1207 05:38:16.226]    php-redis:
I1207 05:38:16.226]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I1207 05:38:16.326] Namespace:    namespace-1544161093-27707
I1207 05:38:16.326] Selector:     app=guestbook,tier=frontend
I1207 05:38:16.326] Labels:       app=guestbook
I1207 05:38:16.326]               tier=frontend
I1207 05:38:16.326] Annotations:  <none>
I1207 05:38:16.326] Replicas:     3 current / 3 desired
I1207 05:38:16.327] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:16.327] Pod Template:
I1207 05:38:16.327]   Labels:  app=guestbook
I1207 05:38:16.327]            tier=frontend
I1207 05:38:16.327]   Containers:
I1207 05:38:16.327]    php-redis:
I1207 05:38:16.327]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 137 lines ...
W1207 05:38:18.307] I1207 05:38:17.809227   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"scale-1-9bdb56f49", UID:"4c0d0791-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2271", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-9bdb56f49-wgb69
W1207 05:38:18.307] I1207 05:38:17.817542   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161093-27707", Name:"scale-2", UID:"4c220127-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2277", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-9bdb56f49 to 3
W1207 05:38:18.307] I1207 05:38:17.820249   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"scale-2-9bdb56f49", UID:"4c2279e1-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2279", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-9bdb56f49-dtzc5
W1207 05:38:18.308] I1207 05:38:17.827774   55520 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544161093-27707", Name:"scale-3", UID:"4c376cbd-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2285", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-9bdb56f49 to 3
W1207 05:38:18.308] I1207 05:38:17.941554   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"scale-3-9bdb56f49", UID:"4c3836bb-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2287", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-9bdb56f49-7zkpc
W1207 05:38:18.308] I1207 05:38:18.041652   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"scale-3-9bdb56f49", UID:"4c3836bb-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2287", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-9bdb56f49-2vwrx
W1207 05:38:18.308] E1207 05:38:18.291016   55520 replica_set.go:450] Sync "namespace-1544161093-27707/scale-2-9bdb56f49" failed with Operation cannot be fulfilled on replicasets.apps "scale-2-9bdb56f49": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1544161093-27707/scale-2-9bdb56f49, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 4c2279e1-f9e2-11e8-9dca-0242ac110002, UID in object meta: 
W1207 05:38:18.390] E1207 05:38:18.390362   55520 replica_set.go:450] Sync "namespace-1544161093-27707/scale-3-9bdb56f49" failed with replicasets.apps "scale-3-9bdb56f49" not found
W1207 05:38:18.442] I1207 05:38:18.441580   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4cee6973-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7lspq
W1207 05:38:18.542] I1207 05:38:18.541661   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4cee6973-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vnn7t
W1207 05:38:18.591] I1207 05:38:18.591482   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4cee6973-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nh5xg
I1207 05:38:18.692] replicaset.extensions/frontend created
I1207 05:38:18.692] apps.sh:587: Successful get rs frontend {{.spec.replicas}}: 3
I1207 05:38:18.692] service/frontend exposed
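The expose step above maps to exposing the replica set as a service (port illustrative):
  kubectl expose rs frontend --port=80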
... skipping 35 lines ...
I1207 05:38:20.993] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1207 05:38:21.073] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1207 05:38:21.143] horizontalpodautoscaler.autoscaling "frontend" deleted
W1207 05:38:21.244] I1207 05:38:20.582730   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4e439c1c-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2392", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-drvkr
W1207 05:38:21.244] I1207 05:38:20.585140   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4e439c1c-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2392", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-67p4z
W1207 05:38:21.245] I1207 05:38:20.585287   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544161093-27707", Name:"frontend", UID:"4e439c1c-f9e2-11e8-9dca-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2392", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j7pfc
W1207 05:38:21.245] Error: required flag(s) "max" not set
W1207 05:38:21.245] 
W1207 05:38:21.245] 
W1207 05:38:21.245] Examples:
W1207 05:38:21.245]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1207 05:38:21.245]   kubectl autoscale deployment foo --min=2 --max=10
W1207 05:38:21.246]   
... skipping 88 lines ...
I1207 05:38:24.012] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 05:38:24.112] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 05:38:24.245] statefulset.apps/nginx rolled back
I1207 05:38:24.357] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1207 05:38:24.461] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 05:38:24.593] Successful
I1207 05:38:24.593] message:error: unable to find specified revision 1000000 in history
I1207 05:38:24.593] has:unable to find specified revision
I1207 05:38:24.710] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1207 05:38:24.816] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 05:38:24.934] statefulset.apps/nginx rolled back
I1207 05:38:25.046] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I1207 05:38:25.147] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
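The StatefulSet rollbacks above use the same rollout verbs as Deployments; a sketch of the sequence these apps.sh checks exercise:
  kubectl rollout undo statefulset/nginx
  kubectl rollout undo statefulset/nginx --to-revision=1000000   # unable to find specified revision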
... skipping 58 lines ...
I1207 05:38:27.567] Name:         mock
I1207 05:38:27.567] Namespace:    namespace-1544161106-2828
I1207 05:38:27.567] Selector:     app=mock
I1207 05:38:27.567] Labels:       app=mock
I1207 05:38:27.568] Annotations:  <none>
I1207 05:38:27.568] Replicas:     1 current / 1 desired
I1207 05:38:27.568] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:27.568] Pod Template:
I1207 05:38:27.569]   Labels:  app=mock
I1207 05:38:27.569]   Containers:
I1207 05:38:27.569]    mock-container:
I1207 05:38:27.569]     Image:        k8s.gcr.io/pause:2.0
I1207 05:38:27.569]     Port:         9949/TCP
... skipping 56 lines ...
I1207 05:38:31.400] Name:         mock
I1207 05:38:31.400] Namespace:    namespace-1544161106-2828
I1207 05:38:31.400] Selector:     app=mock
I1207 05:38:31.400] Labels:       app=mock
I1207 05:38:31.400] Annotations:  <none>
I1207 05:38:31.401] Replicas:     1 current / 1 desired
I1207 05:38:31.401] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:31.401] Pod Template:
I1207 05:38:31.401]   Labels:  app=mock
I1207 05:38:31.401]   Containers:
I1207 05:38:31.401]    mock-container:
I1207 05:38:31.401]     Image:        k8s.gcr.io/pause:2.0
I1207 05:38:31.401]     Port:         9949/TCP
... skipping 56 lines ...
I1207 05:38:33.674] Name:         mock
I1207 05:38:33.674] Namespace:    namespace-1544161106-2828
I1207 05:38:33.674] Selector:     app=mock
I1207 05:38:33.674] Labels:       app=mock
I1207 05:38:33.675] Annotations:  <none>
I1207 05:38:33.675] Replicas:     1 current / 1 desired
I1207 05:38:33.675] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:33.675] Pod Template:
I1207 05:38:33.675]   Labels:  app=mock
I1207 05:38:33.675]   Containers:
I1207 05:38:33.675]    mock-container:
I1207 05:38:33.675]     Image:        k8s.gcr.io/pause:2.0
I1207 05:38:33.676]     Port:         9949/TCP
... skipping 43 lines ...
I1207 05:38:35.947] Namespace:    namespace-1544161106-2828
I1207 05:38:35.947] Selector:     app=mock
I1207 05:38:35.947] Labels:       app=mock
I1207 05:38:35.947]               status=replaced
I1207 05:38:35.948] Annotations:  <none>
I1207 05:38:35.948] Replicas:     1 current / 1 desired
I1207 05:38:35.948] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:35.948] Pod Template:
I1207 05:38:35.948]   Labels:  app=mock
I1207 05:38:35.948]   Containers:
I1207 05:38:35.948]    mock-container:
I1207 05:38:35.948]     Image:        k8s.gcr.io/pause:2.0
I1207 05:38:35.948]     Port:         9949/TCP
... skipping 11 lines ...
I1207 05:38:35.949] Namespace:    namespace-1544161106-2828
I1207 05:38:35.950] Selector:     app=mock2
I1207 05:38:35.950] Labels:       app=mock2
I1207 05:38:35.950]               status=replaced
I1207 05:38:35.950] Annotations:  <none>
I1207 05:38:35.950] Replicas:     1 current / 1 desired
I1207 05:38:35.950] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 05:38:35.950] Pod Template:
I1207 05:38:35.950]   Labels:  app=mock2
I1207 05:38:35.950]   Containers:
I1207 05:38:35.950]    mock-container:
I1207 05:38:35.950]     Image:        k8s.gcr.io/pause:2.0
I1207 05:38:35.951]     Port:         9949/TCP
... skipping 106 lines ...
I1207 05:38:41.802] +++ [1207 05:38:41] Testing persistent volumes
I1207 05:38:41.891] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 05:38:42.085] persistentvolume/pv0001 created
I1207 05:38:42.188] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I1207 05:38:42.279] persistentvolume "pv0001" deleted
W1207 05:38:42.380] I1207 05:38:40.959394   55520 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544161106-2828", Name:"mock", UID:"5a68f0d9-f9e2-11e8-9dca-0242ac110002", APIVersion:"v1", ResourceVersion:"2663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-wmzkb
W1207 05:38:42.381] E1207 05:38:42.091710   55520 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I1207 05:38:42.482] persistentvolume/pv0002 created
I1207 05:38:42.553] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I1207 05:38:42.669] persistentvolume "pv0002" deleted
I1207 05:38:42.867] persistentvolume/pv0003 created
I1207 05:38:42.966] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I1207 05:38:43.066] persistentvolume "pv0003" deleted
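A minimal hostPath PersistentVolume of the kind these storage.sh checks create and delete might look like the following (name, capacity, and path illustrative):
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/pv0001
EOF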
... skipping 478 lines ...
I1207 05:38:49.752] yes
I1207 05:38:49.752] has:the server doesn't have a resource type
I1207 05:38:49.847] Successful
I1207 05:38:49.848] message:yes
I1207 05:38:49.848] has:yes
I1207 05:38:49.958] Successful
I1207 05:38:49.959] message:error: --subresource can not be used with NonResourceURL
I1207 05:38:49.959] has:subresource can not be used with NonResourceURL
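The check above exercises kubectl auth can-i; --subresource applies only to resource requests, so the two request forms stay distinct (a sketch):
  kubectl auth can-i get /logs                     # non-resource URL
  kubectl auth can-i get pods --subresource=log    # resource request with a subresource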
I1207 05:38:50.070] Successful
I1207 05:38:50.178] Successful
I1207 05:38:50.179] message:yes
I1207 05:38:50.179] 0
I1207 05:38:50.179] has:0
... skipping 6 lines ...
I1207 05:38:50.480] role.rbac.authorization.k8s.io/testing-R reconciled
I1207 05:38:50.602] legacy-script.sh:736: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I1207 05:38:50.714] legacy-script.sh:737: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I1207 05:38:50.830] legacy-script.sh:738: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I1207 05:38:50.950] legacy-script.sh:739: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I1207 05:38:51.077] Successful
I1207 05:38:51.077] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I1207 05:38:51.077] has:only rbac.authorization.k8s.io/v1 is supported
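The reconciled role/rolebinding lines above correspond to kubectl auth reconcile, which, per the error message, accepts only rbac.authorization.k8s.io/v1 objects (file name illustrative):
  kubectl auth reconcile -f rbac-v1.yaml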
I1207 05:38:51.205] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I1207 05:38:51.211] role.rbac.authorization.k8s.io "testing-R" deleted
I1207 05:38:51.223] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I1207 05:38:51.231] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I1207 05:38:51.239] Recording: run_retrieve_multiple_tests
... skipping 893 lines ...
I1207 05:39:18.436] message:node/127.0.0.1 already uncordoned (dry run)
I1207 05:39:18.436] has:already uncordoned
I1207 05:39:18.516] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I1207 05:39:18.595] node/127.0.0.1 labeled
I1207 05:39:18.678] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I1207 05:39:18.741] Successful
I1207 05:39:18.742] message:error: cannot specify both a node name and a --selector option
I1207 05:39:18.742] See 'kubectl drain -h' for help and examples
I1207 05:39:18.742] has:cannot specify both a node name
I1207 05:39:18.803] Successful
I1207 05:39:18.803] message:error: USAGE: cordon NODE [flags]
I1207 05:39:18.803] See 'kubectl cordon -h' for help and examples
I1207 05:39:18.803] has:error\: USAGE\: cordon NODE
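The usage errors above come from mixing the two node-addressing modes; valid forms pass either a node name or a selector, never both (node name and label from this run):
  kubectl cordon 127.0.0.1
  kubectl drain 127.0.0.1 --ignore-daemonsets
  kubectl drain --selector=test=label --ignore-daemonsets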
I1207 05:39:18.877] node/127.0.0.1 already uncordoned
I1207 05:39:18.947] Successful
I1207 05:39:18.947] message:error: You must provide one or more resources by argument or filename.
I1207 05:39:18.948] Example resource specifications include:
I1207 05:39:18.948]    '-f rsrc.yaml'
I1207 05:39:18.948]    '--filename=rsrc.json'
I1207 05:39:18.948]    '<resource> <name>'
I1207 05:39:18.948]    '<resource>'
I1207 05:39:18.948] has:must provide one or more resources
... skipping 15 lines ...
I1207 05:39:19.387] Successful
I1207 05:39:19.387] message:The following kubectl-compatible plugins are available:
I1207 05:39:19.387] 
I1207 05:39:19.387] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I1207 05:39:19.387]   - warning: kubectl-version overwrites existing command: "kubectl version"
I1207 05:39:19.387] 
I1207 05:39:19.387] error: one plugin warning was found
I1207 05:39:19.388] has:kubectl-version overwrites existing command: "kubectl version"
I1207 05:39:19.455] Successful
I1207 05:39:19.455] message:The following kubectl-compatible plugins are available:
I1207 05:39:19.455] 
I1207 05:39:19.455] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 05:39:19.456] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I1207 05:39:19.456]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 05:39:19.456] 
I1207 05:39:19.456] error: one plugin warning was found
I1207 05:39:19.456] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I1207 05:39:19.522] Successful
I1207 05:39:19.522] message:The following kubectl-compatible plugins are available:
I1207 05:39:19.523] 
I1207 05:39:19.523] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 05:39:19.523] has:plugins are available
I1207 05:39:19.588] Successful
I1207 05:39:19.588] message:
I1207 05:39:19.589] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I1207 05:39:19.589] error: unable to find any kubectl plugins in your PATH
I1207 05:39:19.589] has:unable to find any kubectl plugins in your PATH
I1207 05:39:19.684] Successful
I1207 05:39:19.685] message:I am plugin foo
I1207 05:39:19.685] has:plugin foo
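kubectl plugin discovery, as exercised above, works purely by PATH lookup of kubectl-* executables; a sketch of a working plugin (install path illustrative, assumed to be on PATH):
cat > "$HOME/bin/kubectl-foo" <<'EOF'
#!/bin/sh
echo "I am plugin foo"
EOF
chmod +x "$HOME/bin/kubectl-foo"   # executable bit is required for discovery
kubectl foo                        # kubectl finds and runs the kubectl-foo executable
kubectl plugin list                # lists plugins, warning on overshadowed or conflicting names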
I1207 05:39:19.750] Successful
I1207 05:39:19.750] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.904+6cb6e6e5babdc9", GitCommit:"6cb6e6e5babdc9f4e0f4bcad04896f3d15397e08", GitTreeState:"clean", BuildDate:"2018-12-07T05:32:17Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I1207 05:39:19.819] 
I1207 05:39:19.820] +++ Running case: test-cmd.run_impersonation_tests 
I1207 05:39:19.822] +++ working dir: /go/src/k8s.io/kubernetes
I1207 05:39:19.825] +++ command: run_impersonation_tests
I1207 05:39:19.834] +++ [1207 05:39:19] Testing impersonation
I1207 05:39:19.898] Successful
I1207 05:39:19.899] message:error: requesting groups or user-extra for  without impersonating a user
I1207 05:39:19.899] has:without impersonating a user
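The impersonation failure above occurs when --as-group is supplied without --as; a valid pairing, as the csr checks below imply (user and group names from this run):
  kubectl create -f csr.yaml --as=user1 --as-group=system:authenticated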
I1207 05:39:20.043] certificatesigningrequest.certificates.k8s.io/foo created
I1207 05:39:20.131] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I1207 05:39:20.213] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I1207 05:39:20.290] certificatesigningrequest.certificates.k8s.io "foo" deleted
I1207 05:39:20.438] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 81 lines ...
W1207 05:39:20.932] I1207 05:39:20.922050   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 5 lines ...
W1207 05:39:20.933] W1207 05:39:20.922249   52167 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 05:39:20.933] W1207 05:39:20.922266   52167 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1207 05:39:20.933] I1207 05:39:20.922268   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:39:20.933] W1207 05:39:20.922272   52167 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 38 lines ...
W1207 05:39:20.941] E1207 05:39:20.922529   52167 controller.go:172] Get https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:6443: connect: connection refused
... skipping 28 lines ...
W1207 05:39:20.946] I1207 05:39:20.922932   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:39:20.946] I1207 05:39:20.922946   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:39:20.946] I1207 05:39:20.922957   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:39:20.946] I1207 05:39:20.922968   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:39:20.946] I1207 05:39:20.923018   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1207 05:39:20.947] I1207 05:39:20.923042   52167 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 72 lines ...
I1207 05:39:52.490] +++ [1207 05:39:52] On try 2, etcd: : http://127.0.0.1:2379
I1207 05:39:52.500] {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
I1207 05:39:52.503] +++ [1207 05:39:52] Running integration test cases
I1207 05:39:56.649] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1alpha1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1beta1,apps/v1beta2,apps/v1,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I1207 05:39:57.318] +++ [1207 05:39:57] Running tests without code coverage
I1207 05:43:29.270] ok  	k8s.io/kubernetes/test/integration/apimachinery	168.686s
I1207 05:43:29.270] FAIL	k8s.io/kubernetes/test/integration/apiserver	37.751s
I1207 05:43:29.271] [restful] 2018/12/07 05:42:22 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:34287/swaggerapi
I1207 05:43:29.271] [restful] 2018/12/07 05:42:22 log.go:33: [restful/swagger] https://127.0.0.1:34287/swaggerui/ is mapped to folder /swagger-ui/
I1207 05:43:29.271] [restful] 2018/12/07 05:42:24 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:34287/swaggerapi
I1207 05:43:29.271] [restful] 2018/12/07 05:42:24 log.go:33: [restful/swagger] https://127.0.0.1:34287/swaggerui/ is mapped to folder /swagger-ui/
I1207 05:43:29.271] ok  	k8s.io/kubernetes/test/integration/auth	94.766s
I1207 05:43:29.272] [restful] 2018/12/07 05:41:16 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:37597/swaggerapi
... skipping 229 lines ...
I1207 05:52:50.840] [restful] 2018/12/07 05:45:41 log.go:33: [restful/swagger] https://127.0.0.1:34727/swaggerui/ is mapped to folder /swagger-ui/
I1207 05:52:50.840] ok  	k8s.io/kubernetes/test/integration/tls	12.450s
I1207 05:52:50.840] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.316s
I1207 05:52:50.840] ok  	k8s.io/kubernetes/test/integration/volume	93.415s
I1207 05:52:50.841] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	146.941s
I1207 05:52:52.481] +++ [1207 05:52:52] Saved JUnit XML test report to /workspace/artifacts/junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-053957.xml
I1207 05:52:52.484] Makefile:184: recipe for target 'test' failed
I1207 05:52:52.490] +++ [1207 05:52:52] Cleaning up etcd
W1207 05:52:52.591] make[1]: *** [test] Error 1
W1207 05:52:52.591] !!! [1207 05:52:52] Call tree:
W1207 05:52:52.591] !!! [1207 05:52:52]  1: hack/make-rules/test-integration.sh:105 runTests(...)
W1207 05:52:52.633] make: *** [test-integration] Error 1
I1207 05:52:52.733] +++ [1207 05:52:52] Integration test cleanup complete
I1207 05:52:52.733] Makefile:203: recipe for target 'test-integration' failed
W1207 05:52:55.116] Traceback (most recent call last):
W1207 05:52:55.117]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 167, in <module>
W1207 05:52:55.130]     main(ARGS.branch, ARGS.script, ARGS.force, ARGS.prow)
W1207 05:52:55.130]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 136, in main
W1207 05:52:55.130]     check(*cmd)
W1207 05:52:55.130]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W1207 05:52:55.130]     subprocess.check_call(cmd)
W1207 05:52:55.130]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
W1207 05:52:55.143]     raise CalledProcessError(retcode, cmd)
W1207 05:52:55.144] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181105-ceed87206', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E1207 05:52:55.149] Command failed
I1207 05:52:55.149] process 691 exited with code 1 after 27.1m
E1207 05:52:55.149] FAIL: pull-kubernetes-integration
I1207 05:52:55.150] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1207 05:52:58.264] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1207 05:52:58.306] process 123153 exited with code 0 after 0.1m
I1207 05:52:58.306] Call:  gcloud config get-value account
I1207 05:52:58.593] process 123166 exited with code 0 after 0.0m
I1207 05:52:58.594] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 05:52:58.594] Upload result and artifacts...
I1207 05:52:58.594] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/71384/pull-kubernetes-integration/37827
I1207 05:52:58.594] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/71384/pull-kubernetes-integration/37827/artifacts
W1207 05:53:02.365] CommandException: One or more URLs matched no objects.
E1207 05:53:02.514] Command failed
I1207 05:53:02.514] process 123179 exited with code 1 after 0.1m
W1207 05:53:02.514] Remote dir gs://kubernetes-jenkins/pr-logs/pull/71384/pull-kubernetes-integration/37827/artifacts not exist yet
I1207 05:53:02.515] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/71384/pull-kubernetes-integration/37827/artifacts
I1207 05:53:05.918] process 123324 exited with code 0 after 0.1m
W1207 05:53:05.919] metadata path /workspace/_artifacts/metadata.json does not exist
W1207 05:53:05.919] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...