PR by bsalamat: Use runtime.NumCPU() instead of a fixed value for parallel scheduler threads
Result: FAILURE
Tests: 1 failed / 119 succeeded
Started: 2019-02-11 21:54
Elapsed: 16m7s
Revision:
Builder: gke-prow-containerd-pool-99179761-cvt8
Refs: master:f7c4389b, 73934:d0ebeefb
pod: 84423f78-2e47-11e9-8de8-0a580a6c0524
infra-commit: 89e68fa6f
repo: k8s.io/kubernetes
repo-commit: 43447e2bbf01317243b5728b59a46d0f23cddc77
repos: {u'k8s.io/kubernetes': u'master:f7c4389b793cd6cf0de8d67f2c5db28b3985ad59,73934:d0ebeefbc40e30a96ec9b788353ff6969719fdd9'}

References

PR #73934: Use runtime.NumCPU() instead of a fixed value for parallel scheduler threads
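For context, a minimal sketch of the pattern the PR title describes: sizing a parallel fan-out by runtime.NumCPU() instead of a hard-coded worker count. Everything below is illustrative only; the parallelize helper, the stand-in node slice, and the fixed value of 16 are assumptions for the example, not the actual diff in #73934.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelize fans pieces of work out across a fixed number of goroutines.
// Before a change like the one in this PR, workers would be a hard-coded
// constant (hypothetically 16); after it, the count tracks the machine.
func parallelize(workers, pieces int, doWork func(piece int)) {
	work := make(chan int, pieces)
	for i := 0; i < pieces; i++ {
		work <- i
	}
	close(work)

	var wg sync.WaitGroup
	wg.Add(workers)
	for i := 0; i < workers; i++ {
		go func() {
			defer wg.Done()
			for piece := range work {
				doWork(piece)
			}
		}()
	}
	wg.Wait()
}

func main() {
	nodes := make([]string, 100) // stand-in for the scheduler's node list
	// Size the worker pool by available CPUs rather than a fixed value.
	parallelize(runtime.NumCPU(), len(nodes), func(i int) {
		_ = nodes[i] // a real scheduler would evaluate predicates here
	})
	fmt.Printf("checked %d nodes with %d workers\n", len(nodes), runtime.NumCPU())
}
```

The design point is simply that the degree of parallelism then scales with the machine the process runs on instead of being pinned at one size everywhere.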

Test Failures


test-cmd run_crd_tests 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=test\-cmd\srun\_crd\_tests$'
/go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 72902 Killed                  while [ ${tries} -lt 10 ]; do
    tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
done
/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 72901 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
!!! [0211 22:07:38] Call tree:
!!! [0211 22:07:38]  1: /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh:443 kube::test::get_object_assert(...)
!!! [0211 22:07:38]  2: /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh:133 run_non_native_resource_tests(...)
!!! [0211 22:07:38]  3: /go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 run_crd_tests(...)
!!! [0211 22:07:38]  4: /go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0211 22:07:38]  5: /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:134 juLog(...)
!!! [0211 22:07:38]  6: /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:517 record_command(...)
!!! [0211 22:07:38]  7: hack/make-rules/test-cmd.sh:109 runTests(...)
				
stdout/stderr from junit_test-cmd.xml


Error lines from build-log.txt

... skipping 307 lines ...
W0211 22:04:10.629] I0211 22:04:10.628912   54241 serving.go:311] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0211 22:04:10.629] I0211 22:04:10.628994   54241 server.go:561] external host was not specified, using 172.17.0.2
W0211 22:04:10.630] W0211 22:04:10.629010   54241 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0211 22:04:10.630] I0211 22:04:10.629351   54241 server.go:146] Version: v1.14.0-alpha.2.537+43447e2bbf0131
W0211 22:04:12.297] I0211 22:04:12.297280   54241 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 22:04:12.298] I0211 22:04:12.297315   54241 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 22:04:12.298] E0211 22:04:12.298021   54241 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:12.298] E0211 22:04:12.298106   54241 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:12.299] E0211 22:04:12.298184   54241 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:12.299] E0211 22:04:12.298222   54241 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:12.299] E0211 22:04:12.298248   54241 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:12.300] E0211 22:04:12.298264   54241 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:12.300] I0211 22:04:12.298285   54241 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 22:04:12.300] I0211 22:04:12.298292   54241 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 22:04:12.300] I0211 22:04:12.300257   54241 clientconn.go:551] parsed scheme: ""
W0211 22:04:12.301] I0211 22:04:12.300279   54241 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 22:04:12.301] I0211 22:04:12.300324   54241 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 22:04:12.301] I0211 22:04:12.300479   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 335 lines ...
W0211 22:04:12.846] W0211 22:04:12.846199   54241 genericapiserver.go:330] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0211 22:04:13.295] I0211 22:04:13.295502   54241 clientconn.go:551] parsed scheme: ""
W0211 22:04:13.296] I0211 22:04:13.295543   54241 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 22:04:13.296] I0211 22:04:13.295633   54241 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 22:04:13.297] I0211 22:04:13.295726   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 22:04:13.297] I0211 22:04:13.296569   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 22:04:13.829] E0211 22:04:13.828506   54241 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:13.830] E0211 22:04:13.828552   54241 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:13.830] E0211 22:04:13.828605   54241 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:13.830] E0211 22:04:13.828640   54241 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:13.831] E0211 22:04:13.828683   54241 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:13.831] E0211 22:04:13.828719   54241 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 22:04:13.832] I0211 22:04:13.828760   54241 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 22:04:13.832] I0211 22:04:13.828768   54241 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 22:04:13.832] I0211 22:04:13.830831   54241 clientconn.go:551] parsed scheme: ""
W0211 22:04:13.833] I0211 22:04:13.830863   54241 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 22:04:13.833] I0211 22:04:13.830908   54241 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 22:04:13.833] I0211 22:04:13.830955   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 147 lines ...
W0211 22:04:59.244] I0211 22:04:59.131826   57605 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
W0211 22:04:59.244] I0211 22:04:59.131992   57605 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-controller-manager...
W0211 22:04:59.254] I0211 22:04:59.253629   57605 leaderelection.go:227] successfully acquired lease kube-system/kube-controller-manager
W0211 22:04:59.255] I0211 22:04:59.254608   57605 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"12af499c-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"150", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 2fbbb0391761_129d996d-2e49-11e9-8b60-0242ac110002 became leader
W0211 22:04:59.314] I0211 22:04:59.314182   57605 plugins.go:103] No cloud provider specified.
W0211 22:04:59.315] W0211 22:04:59.315156   57605 controllermanager.go:513] "serviceaccount-token" is disabled because there is no private key
W0211 22:04:59.317] E0211 22:04:59.316595   57605 prometheus.go:138] failed to register depth metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_depth", help: "(Deprecated) Current depth of workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_depth" is not a valid metric name
W0211 22:04:59.317] E0211 22:04:59.317321   57605 prometheus.go:150] failed to register adds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_adds", help: "(Deprecated) Total number of adds handled by workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_adds" is not a valid metric name
W0211 22:04:59.318] E0211 22:04:59.317963   57605 prometheus.go:162] failed to register latency metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_queue_latency", help: "(Deprecated) How long an item stays in workqueuedisruption-recheck before being requested.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_queue_latency" is not a valid metric name
W0211 22:04:59.319] E0211 22:04:59.318645   57605 prometheus.go:174] failed to register work_duration metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_work_duration", help: "(Deprecated) How long processing an item from workqueuedisruption-recheck takes.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_work_duration" is not a valid metric name
W0211 22:04:59.319] E0211 22:04:59.319311   57605 prometheus.go:189] failed to register unfinished_work_seconds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_unfinished_work_seconds", help: "(Deprecated) How many seconds of work disruption-recheck has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_unfinished_work_seconds" is not a valid metric name
W0211 22:04:59.320] E0211 22:04:59.320084   57605 prometheus.go:202] failed to register longest_running_processor_microseconds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for disruption-recheck been running.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_longest_running_processor_microseconds" is not a valid metric name
W0211 22:04:59.321] E0211 22:04:59.320795   57605 prometheus.go:214] failed to register retries metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_retries", help: "(Deprecated) Total number of retries handled by workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_retries" is not a valid metric name
W0211 22:04:59.321] I0211 22:04:59.321540   57605 controllermanager.go:493] Started "disruption"
W0211 22:04:59.322] I0211 22:04:59.322321   57605 node_lifecycle_controller.go:77] Sending events to api server
W0211 22:04:59.323] E0211 22:04:59.322789   57605 core.go:162] failed to start cloud node lifecycle controller: no cloud provider provided
W0211 22:04:59.323] W0211 22:04:59.322937   57605 controllermanager.go:485] Skipping "cloud-node-lifecycle"
W0211 22:04:59.323] W0211 22:04:59.322996   57605 controllermanager.go:485] Skipping "ttl-after-finished"
W0211 22:04:59.324] I0211 22:04:59.323383   57605 controllermanager.go:493] Started "podgc"
W0211 22:04:59.324] I0211 22:04:59.324198   57605 controllermanager.go:493] Started "deployment"
W0211 22:04:59.325] I0211 22:04:59.324938   57605 controllermanager.go:493] Started "replicaset"
W0211 22:04:59.325] I0211 22:04:59.324969   57605 core.go:172] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
... skipping 34 lines ...
W0211 22:04:59.388] I0211 22:04:59.388339   57605 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0211 22:04:59.388] I0211 22:04:59.388585   57605 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
W0211 22:04:59.389] I0211 22:04:59.388827   57605 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
W0211 22:04:59.389] I0211 22:04:59.389089   57605 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
W0211 22:04:59.389] I0211 22:04:59.389347   57605 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
W0211 22:04:59.390] I0211 22:04:59.389587   57605 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0211 22:04:59.390] E0211 22:04:59.389780   57605 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 22:04:59.391] I0211 22:04:59.390006   57605 controllermanager.go:493] Started "resourcequota"
W0211 22:04:59.391] I0211 22:04:59.390037   57605 resource_quota_controller.go:276] Starting resource quota controller
W0211 22:04:59.391] I0211 22:04:59.390570   57605 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0211 22:04:59.391] I0211 22:04:59.390665   57605 resource_quota_monitor.go:301] QuotaMonitor running
W0211 22:04:59.392] I0211 22:04:59.391215   57605 controllermanager.go:493] Started "horizontalpodautoscaling"
W0211 22:04:59.392] I0211 22:04:59.391292   57605 horizontal.go:156] Starting HPA controller
... skipping 14 lines ...
W0211 22:04:59.685] I0211 22:04:59.513872   57605 controllermanager.go:493] Started "garbagecollector"
W0211 22:04:59.685] I0211 22:04:59.514303   57605 garbagecollector.go:130] Starting garbage collector controller
W0211 22:04:59.685] I0211 22:04:59.514334   57605 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 22:04:59.685] I0211 22:04:59.514361   57605 graph_builder.go:308] GraphBuilder running
W0211 22:04:59.686] I0211 22:04:59.515551   57605 controllermanager.go:493] Started "ttl"
W0211 22:04:59.686] I0211 22:04:59.515826   57605 ttl_controller.go:116] Starting TTL controller
W0211 22:04:59.686] E0211 22:04:59.527501   57605 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0211 22:04:59.686] W0211 22:04:59.528181   57605 controllermanager.go:485] Skipping "service"
W0211 22:04:59.686] I0211 22:04:59.523420   57605 controller_utils.go:1021] Waiting for caches to sync for TTL controller
W0211 22:04:59.687] I0211 22:04:59.531006   57605 controllermanager.go:493] Started "serviceaccount"
W0211 22:04:59.687] I0211 22:04:59.531732   57605 controllermanager.go:493] Started "statefulset"
W0211 22:04:59.687] W0211 22:04:59.531917   57605 controllermanager.go:472] "tokencleaner" is disabled
W0211 22:04:59.687] I0211 22:04:59.532630   57605 controllermanager.go:493] Started "persistentvolume-expander"
... skipping 32 lines ...
W0211 22:04:59.693] I0211 22:04:59.547862   57605 controllermanager.go:493] Started "job"
W0211 22:04:59.693] I0211 22:04:59.547993   57605 job_controller.go:143] Starting job controller
W0211 22:04:59.693] I0211 22:04:59.548003   57605 controller_utils.go:1021] Waiting for caches to sync for job controller
W0211 22:04:59.693] I0211 22:04:59.548407   57605 controllermanager.go:493] Started "cronjob"
W0211 22:04:59.694] I0211 22:04:59.548443   57605 cronjob_controller.go:92] Starting CronJob Manager
W0211 22:04:59.694] W0211 22:04:59.548464   57605 controllermanager.go:485] Skipping "csrsigning"
W0211 22:04:59.694] W0211 22:04:59.596991   57605 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0211 22:04:59.694] I0211 22:04:59.627397   57605 controller_utils.go:1028] Caches are synced for GC controller
W0211 22:04:59.694] I0211 22:04:59.648476   57605 controller_utils.go:1028] Caches are synced for endpoint controller
W0211 22:04:59.695] I0211 22:04:59.649112   57605 controller_utils.go:1028] Caches are synced for TTL controller
W0211 22:04:59.695] I0211 22:04:59.650703   57605 controller_utils.go:1028] Caches are synced for certificate controller
W0211 22:04:59.695] I0211 22:04:59.655452   57605 controller_utils.go:1028] Caches are synced for PVC protection controller
W0211 22:04:59.695] I0211 22:04:59.656961   57605 controller_utils.go:1028] Caches are synced for ReplicationController controller
... skipping 48 lines ...
I0211 22:05:00.870] Successful: --output json has correct client info
I0211 22:05:00.878] Successful: --output json has correct server info
I0211 22:05:00.889] +++ [0211 22:05:00] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0211 22:05:01.068] Successful: --client --output json has correct client info
I0211 22:05:01.076] Successful: --client --output json has no server info
I0211 22:05:01.079] +++ [0211 22:05:01] Testing kubectl version: compare json output using additional --short flag
W0211 22:05:01.180] E0211 22:05:00.930550   57605 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 22:05:01.181] I0211 22:05:01.008324   57605 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 22:05:01.181] I0211 22:05:01.109179   57605 controller_utils.go:1028] Caches are synced for garbage collector controller
I0211 22:05:01.281] Successful: --short --output client json info is equal to non short result
I0211 22:05:01.282] Successful: --short --output server json info is equal to non short result
I0211 22:05:01.282] +++ [0211 22:05:01] Testing kubectl version: compare json output with yaml output
I0211 22:05:01.455] Successful: --output json/yaml has identical information
... skipping 44 lines ...
I0211 22:05:04.572] +++ working dir: /go/src/k8s.io/kubernetes
I0211 22:05:04.574] +++ command: run_RESTMapper_evaluation_tests
I0211 22:05:04.586] +++ [0211 22:05:04] Creating namespace namespace-1549922704-10027
I0211 22:05:04.768] namespace/namespace-1549922704-10027 created
I0211 22:05:04.866] Context "test" modified.
I0211 22:05:04.875] +++ [0211 22:05:04] Testing RESTMapper
I0211 22:05:05.029] +++ [0211 22:05:05] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0211 22:05:05.048] +++ exit code: 0
I0211 22:05:05.192] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0211 22:05:05.192] bindings                                                                      true         Binding
I0211 22:05:05.193] componentstatuses                 cs                                          false        ComponentStatus
I0211 22:05:05.193] configmaps                        cm                                          true         ConfigMap
I0211 22:05:05.193] endpoints                         ep                                          true         Endpoints
... skipping 587 lines ...
I0211 22:05:28.935] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:05:29.174] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:05:29.334] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:05:29.578] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:05:29.707] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:05:29.814] pod "valid-pod" force deleted
W0211 22:05:29.915] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0211 22:05:29.915] error: setting 'all' parameter but found a non empty selector. 
W0211 22:05:29.915] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 22:05:30.016] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0211 22:05:30.075] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0211 22:05:30.172] namespace/test-kubectl-describe-pod created
I0211 22:05:30.283] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0211 22:05:30.449] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0211 22:05:31.531] poddisruptionbudget.policy/test-pdb-3 created
I0211 22:05:31.669] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0211 22:05:31.830] poddisruptionbudget.policy/test-pdb-4 created
I0211 22:05:31.954] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0211 22:05:32.162] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:05:32.401] pod/env-test-pod created
W0211 22:05:32.502] error: min-available and max-unavailable cannot be both specified
I0211 22:05:32.603] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0211 22:05:32.603] Name:               env-test-pod
I0211 22:05:32.603] Namespace:          test-kubectl-describe-pod
I0211 22:05:32.603] Priority:           0
I0211 22:05:32.604] PriorityClassName:  <none>
I0211 22:05:32.604] Node:               <none>
... skipping 145 lines ...
I0211 22:05:45.012] replicationcontroller "modified" deleted
W0211 22:05:45.113] I0211 22:05:44.282827   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922739-11430", Name:"modified", UID:"2d863400-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"380", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-qf6z4
I0211 22:05:45.296] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:05:45.490] pod/valid-pod created
I0211 22:05:45.594] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:05:45.790] Successful
I0211 22:05:45.791] message:Error from server: cannot restore map from string
I0211 22:05:45.791] has:cannot restore map from string
I0211 22:05:45.886] Successful
I0211 22:05:45.887] message:pod/valid-pod patched (no change)
I0211 22:05:45.887] has:patched (no change)
I0211 22:05:45.978] pod/valid-pod patched
W0211 22:05:46.079] E0211 22:05:45.783011   54241 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0211 22:05:46.179] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 22:05:46.206] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
I0211 22:05:46.306] pod/valid-pod patched
I0211 22:05:46.409] core.sh:461: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx2:
I0211 22:05:46.497] pod/valid-pod patched
I0211 22:05:46.611] core.sh:465: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 22:05:46.705] pod/valid-pod patched
I0211 22:05:46.830] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0211 22:05:46.911] pod/valid-pod patched
I0211 22:05:47.010] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0211 22:05:47.193] pod/valid-pod patched
I0211 22:05:47.316] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 22:05:47.523] +++ [0211 22:05:47] "kubectl patch with resourceVersion 500" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0211 22:05:47.803] pod "valid-pod" deleted
I0211 22:05:47.812] pod/valid-pod replaced
I0211 22:05:47.915] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0211 22:05:48.080] Successful
I0211 22:05:48.081] message:error: --grace-period must have --force specified
I0211 22:05:48.081] has:\-\-grace-period must have \-\-force specified
I0211 22:05:48.268] Successful
I0211 22:05:48.268] message:error: --timeout must have --force specified
I0211 22:05:48.268] has:\-\-timeout must have \-\-force specified
W0211 22:05:48.434] W0211 22:05:48.433986   57605 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0211 22:05:48.548] node/node-v1-test created
I0211 22:05:48.686] node/node-v1-test replaced
I0211 22:05:48.801] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0211 22:05:48.889] node "node-v1-test" deleted
I0211 22:05:48.985] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 22:05:49.299] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 24 lines ...
I0211 22:05:51.321] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:05:51.405] pod "valid-pod" force deleted
I0211 22:05:51.493] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:05:51.499] +++ [0211 22:05:51] Creating namespace namespace-1549922751-22570
I0211 22:05:51.566] namespace/namespace-1549922751-22570 created
I0211 22:05:51.645] Context "test" modified.
W0211 22:05:51.745] error: 'name' already has a value (valid-pod), and --overwrite is false
W0211 22:05:51.746] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 22:05:51.847] core.sh:610: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:05:51.935] pod/redis-master created
I0211 22:05:51.939] pod/valid-pod created
I0211 22:05:52.039] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0211 22:05:52.133] core.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
... skipping 75 lines ...
I0211 22:06:00.159] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0211 22:06:00.162] +++ working dir: /go/src/k8s.io/kubernetes
I0211 22:06:00.165] +++ command: run_kubectl_create_error_tests
I0211 22:06:00.182] +++ [0211 22:06:00] Creating namespace namespace-1549922760-13118
I0211 22:06:00.349] namespace/namespace-1549922760-13118 created
I0211 22:06:00.507] Context "test" modified.
I0211 22:06:00.517] +++ [0211 22:06:00] Testing kubectl create with error
W0211 22:06:00.662] Error: required flag(s) "filename" not set
W0211 22:06:00.663] 
W0211 22:06:00.664] 
W0211 22:06:00.664] Examples:
W0211 22:06:00.664]   # Create a pod using the data in pod.json.
W0211 22:06:00.665]   kubectl create -f ./pod.json
W0211 22:06:00.665]   
... skipping 38 lines ...
W0211 22:06:00.678]   kubectl create -f FILENAME [options]
W0211 22:06:00.679] 
W0211 22:06:00.679] Use "kubectl <command> --help" for more information about a given command.
W0211 22:06:00.679] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0211 22:06:00.679] 
W0211 22:06:00.680] required flag(s) "filename" not set
I0211 22:06:01.804] +++ [0211 22:06:01] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0211 22:06:01.904] kubectl convert is DEPRECATED and will be removed in a future version.
W0211 22:06:01.905] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 22:06:02.005] +++ exit code: 0
I0211 22:06:02.025] Recording: run_kubectl_apply_tests
I0211 22:06:02.025] Running command: run_kubectl_apply_tests
I0211 22:06:02.043] 
... skipping 17 lines ...
I0211 22:06:03.211] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I0211 22:06:04.045] deployment.extensions "test-deployment-retainkeys" deleted
I0211 22:06:04.138] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:04.290] pod/selector-test-pod created
I0211 22:06:04.392] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0211 22:06:04.469] Successful
I0211 22:06:04.470] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0211 22:06:04.470] has:pods "selector-test-pod-dont-apply" not found
I0211 22:06:04.545] pod "selector-test-pod" deleted
I0211 22:06:04.645] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:04.902] pod/test-pod created (server dry run)
W0211 22:06:05.003] I0211 22:06:03.618126   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922762-23716", Name:"test-deployment-retainkeys", UID:"38afc94f-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set test-deployment-retainkeys-84449f7ff9 to 0
W0211 22:06:05.004] I0211 22:06:03.623092   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922762-23716", Name:"test-deployment-retainkeys-84449f7ff9", UID:"38b28845-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: test-deployment-retainkeys-84449f7ff9-jml6c
... skipping 8 lines ...
W0211 22:06:06.016] I0211 22:06:06.016337   54241 clientconn.go:551] parsed scheme: ""
W0211 22:06:06.017] I0211 22:06:06.016365   54241 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 22:06:06.017] I0211 22:06:06.016406   54241 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 22:06:06.017] I0211 22:06:06.016525   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 22:06:06.018] I0211 22:06:06.016933   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 22:06:06.023] I0211 22:06:06.022721   54241 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0211 22:06:06.103] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0211 22:06:06.204] kind.mygroup.example.com/myobj created (server dry run)
I0211 22:06:06.204] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0211 22:06:06.274] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:06.414] pod/a created
I0211 22:06:07.714] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0211 22:06:07.800] Successful
I0211 22:06:07.800] message:Error from server (NotFound): pods "b" not found
I0211 22:06:07.800] has:pods "b" not found
I0211 22:06:07.957] pod/b created
I0211 22:06:07.968] pod/a pruned
I0211 22:06:09.453] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0211 22:06:09.538] Successful
I0211 22:06:09.538] message:Error from server (NotFound): pods "a" not found
I0211 22:06:09.538] has:pods "a" not found
I0211 22:06:09.620] pod "b" deleted
I0211 22:06:09.715] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:09.873] pod/a created
I0211 22:06:09.963] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0211 22:06:10.045] Successful
I0211 22:06:10.045] message:Error from server (NotFound): pods "b" not found
I0211 22:06:10.045] has:pods "b" not found
I0211 22:06:10.192] pod/b created
I0211 22:06:10.276] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0211 22:06:10.362] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I0211 22:06:10.432] pod "a" deleted
I0211 22:06:10.436] pod "b" deleted
I0211 22:06:10.594] Successful
I0211 22:06:10.594] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0211 22:06:10.594] has:all resources selected for prune without explicitly passing --all
I0211 22:06:10.731] pod/a created
I0211 22:06:10.736] pod/b created
I0211 22:06:10.743] service/prune-svc created
I0211 22:06:12.035] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0211 22:06:12.114] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 138 lines ...
W0211 22:06:24.056] I0211 22:06:22.600794   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922781-24255", Name:"nginx-apps", UID:"445cf47b-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-apps-8cc6769d to 1
W0211 22:06:24.057] I0211 22:06:22.604292   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922781-24255", Name:"nginx-apps-8cc6769d", UID:"445d95f3-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-apps-8cc6769d-jd76z
W0211 22:06:24.057] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0211 22:06:24.057] I0211 22:06:23.025542   54241 controller.go:606] quota admission added evaluator for: cronjobs.batch
I0211 22:06:24.158] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0211 22:06:24.173] Successful
I0211 22:06:24.173] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0211 22:06:24.174] has:pods "selector-test-pod-dont-apply" not found
I0211 22:06:24.269] pod "selector-test-pod" deleted
I0211 22:06:24.291] +++ exit code: 0
I0211 22:06:24.347] Recording: run_kubectl_apply_deployments_tests
I0211 22:06:24.347] Running command: run_kubectl_apply_deployments_tests
I0211 22:06:24.372] 
... skipping 34 lines ...
I0211 22:06:26.438] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:26.512] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:26.603] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:26.748] deployment.extensions/nginx created
I0211 22:06:26.840] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0211 22:06:31.025] Successful
I0211 22:06:31.026] message:Error from server (Conflict): error when applying patch:
I0211 22:06:31.026] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1549922784-18291\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0211 22:06:31.026] to:
I0211 22:06:31.027] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0211 22:06:31.027] Name: "nginx", Namespace: "namespace-1549922784-18291"
I0211 22:06:31.028] Object: &{map["metadata":map["name":"nginx" "uid":"46d65e3b-2e49-11e9-a159-0242ac110002" "resourceVersion":"722" "generation":'\x01' "namespace":"namespace-1549922784-18291" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1549922784-18291/deployments/nginx" "creationTimestamp":"2019-02-11T22:06:26Z" "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1549922784-18291\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"]] "spec":map["revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']]] "status":map["conditions":[map["lastUpdateTime":"2019-02-11T22:06:26Z" "lastTransitionTime":"2019-02-11T22:06:26Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available" "status":"False"]] "observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03'] "kind":"Deployment" "apiVersion":"extensions/v1beta1"]}
I0211 22:06:31.029] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0211 22:06:31.029] has:Error from server (Conflict)
W0211 22:06:31.130] I0211 22:06:26.750842   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922784-18291", Name:"nginx", UID:"46d65e3b-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-776cc67f78 to 3
W0211 22:06:31.130] I0211 22:06:26.754175   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922784-18291", Name:"nginx-776cc67f78", UID:"46d6e1ca-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-fvcpc
W0211 22:06:31.131] I0211 22:06:26.756662   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922784-18291", Name:"nginx-776cc67f78", UID:"46d6e1ca-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-9gg2h
W0211 22:06:31.131] I0211 22:06:26.756776   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922784-18291", Name:"nginx-776cc67f78", UID:"46d6e1ca-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-cshnv
W0211 22:06:35.761] E0211 22:06:35.760987   57605 replica_set.go:450] Sync "namespace-1549922784-18291/nginx-776cc67f78" failed with Operation cannot be fulfilled on replicasets.apps "nginx-776cc67f78": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1549922784-18291/nginx-776cc67f78, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 46d6e1ca-2e49-11e9-a159-0242ac110002, UID in object meta: 
I0211 22:06:36.673] deployment.extensions/nginx configured
W0211 22:06:36.774] I0211 22:06:36.678239   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922784-18291", Name:"nginx", UID:"4cc0a32b-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"744", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7bd4fbc645 to 3
W0211 22:06:36.774] I0211 22:06:36.682720   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922784-18291", Name:"nginx-7bd4fbc645", UID:"4cc15382-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-vv9fp
W0211 22:06:36.775] I0211 22:06:36.686983   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922784-18291", Name:"nginx-7bd4fbc645", UID:"4cc15382-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-s6jqc
W0211 22:06:36.775] I0211 22:06:36.693412   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922784-18291", Name:"nginx-7bd4fbc645", UID:"4cc15382-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-94tlh
I0211 22:06:36.876] Successful
... skipping 141 lines ...
I0211 22:06:44.683] +++ [0211 22:06:44] Creating namespace namespace-1549922804-24695
I0211 22:06:44.774] namespace/namespace-1549922804-24695 created
I0211 22:06:44.874] Context "test" modified.
I0211 22:06:44.880] +++ [0211 22:06:44] Testing kubectl get
I0211 22:06:44.997] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:45.110] Successful
I0211 22:06:45.111] message:Error from server (NotFound): pods "abc" not found
I0211 22:06:45.111] has:pods "abc" not found
I0211 22:06:45.225] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:45.336] Successful
I0211 22:06:45.336] message:Error from server (NotFound): pods "abc" not found
I0211 22:06:45.337] has:pods "abc" not found
I0211 22:06:45.447] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:45.563] Successful
I0211 22:06:45.564] message:{
I0211 22:06:45.564]     "apiVersion": "v1",
I0211 22:06:45.564]     "items": [],
... skipping 23 lines ...
I0211 22:06:46.027] has not:No resources found
I0211 22:06:46.143] Successful
I0211 22:06:46.143] message:NAME
I0211 22:06:46.144] has not:No resources found
I0211 22:06:46.268] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:46.405] Successful
I0211 22:06:46.406] message:error: the server doesn't have a resource type "foobar"
I0211 22:06:46.406] has not:No resources found
I0211 22:06:46.508] Successful
I0211 22:06:46.508] message:No resources found.
I0211 22:06:46.509] has:No resources found
I0211 22:06:46.619] Successful
I0211 22:06:46.620] message:
I0211 22:06:46.620] has not:No resources found
I0211 22:06:46.731] Successful
I0211 22:06:46.732] message:No resources found.
I0211 22:06:46.732] has:No resources found
I0211 22:06:46.839] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:46.947] Successful
I0211 22:06:46.947] message:Error from server (NotFound): pods "abc" not found
I0211 22:06:46.948] has:pods "abc" not found
I0211 22:06:46.948] FAIL!
I0211 22:06:46.949] message:Error from server (NotFound): pods "abc" not found
I0211 22:06:46.949] has not:List
I0211 22:06:46.949] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0211 22:06:47.102] Successful
I0211 22:06:47.102] message:I0211 22:06:47.035475   69829 loader.go:359] Config loaded from file /tmp/tmp.XjB0tCGPYk/.kube/config
I0211 22:06:47.103] I0211 22:06:47.037097   69829 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0211 22:06:47.103] I0211 22:06:47.077026   69829 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 653 lines ...
I0211 22:06:51.489] }
I0211 22:06:51.601] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:06:51.918] <no value>Successful
I0211 22:06:51.919] message:valid-pod:
I0211 22:06:51.919] has:valid-pod:
I0211 22:06:52.041] Successful
I0211 22:06:52.041] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0211 22:06:52.041] 	template was:
I0211 22:06:52.042] 		{.missing}
I0211 22:06:52.042] 	object given to jsonpath engine was:
I0211 22:06:52.043] 		map[string]interface {}{"spec":map[string]interface {}{"restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname"}}}, "status":map[string]interface {}{"qosClass":"Guaranteed", "phase":"Pending"}, "kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1549922810-21431", "selfLink":"/api/v1/namespaces/namespace-1549922810-21431/pods/valid-pod", "uid":"55753df5-2e49-11e9-a159-0242ac110002", "resourceVersion":"817", "creationTimestamp":"2019-02-11T22:06:51Z"}}
I0211 22:06:52.043] has:missing is not found
W0211 22:06:52.159] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0211 22:06:52.260] Successful
I0211 22:06:52.328] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0211 22:06:52.329] 	template was:
I0211 22:06:52.329] 		{{.missing}}
I0211 22:06:52.330] 	raw data was:
I0211 22:06:52.331] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-02-11T22:06:51Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1549922810-21431","resourceVersion":"817","selfLink":"/api/v1/namespaces/namespace-1549922810-21431/pods/valid-pod","uid":"55753df5-2e49-11e9-a159-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0211 22:06:52.331] 	object given to template engine was:
I0211 22:06:52.332] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-02-11T22:06:51Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1549922810-21431 resourceVersion:817 selfLink:/api/v1/namespaces/namespace-1549922810-21431/pods/valid-pod uid:55753df5-2e49-11e9-a159-0242ac110002] spec:map[priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true] status:map[phase:Pending qosClass:Guaranteed]]
... skipping 87 lines ...
I0211 22:06:55.682]   terminationGracePeriodSeconds: 30
I0211 22:06:55.682] status:
I0211 22:06:55.682]   phase: Pending
I0211 22:06:55.682]   qosClass: Guaranteed
I0211 22:06:55.682] has:name: valid-pod
I0211 22:06:55.684] Successful
I0211 22:06:55.684] message:Error from server (NotFound): pods "invalid-pod" not found
I0211 22:06:55.684] has:"invalid-pod" not found
I0211 22:06:55.796] pod "valid-pod" deleted
I0211 22:06:55.914] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:06:56.105] pod/redis-master created
I0211 22:06:56.109] pod/valid-pod created
I0211 22:06:56.225] Successful
... skipping 232 lines ...
I0211 22:07:01.040] customresourcedefinition.apiextensions.k8s.io/foos.company.com created
I0211 22:07:01.183] old-print.sh:120: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: foos.company.com:
I0211 22:07:01.337] old-print.sh:123: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:01.545] Successful
I0211 22:07:01.546] message:
I0211 22:07:01.546] has:
W0211 22:07:01.647] E0211 22:07:01.189233   57605 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"]
W0211 22:07:01.647] I0211 22:07:01.322592   54241 clientconn.go:551] parsed scheme: ""
W0211 22:07:01.648] I0211 22:07:01.322627   54241 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 22:07:01.648] I0211 22:07:01.322667   54241 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 22:07:01.648] I0211 22:07:01.322725   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 22:07:01.648] I0211 22:07:01.324300   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 22:07:01.649] No resources found.
... skipping 12 lines ...
I0211 22:07:02.500] Running command: run_create_secret_tests
I0211 22:07:02.522] 
I0211 22:07:02.526] +++ Running case: test-cmd.run_create_secret_tests 
I0211 22:07:02.528] +++ working dir: /go/src/k8s.io/kubernetes
I0211 22:07:02.530] +++ command: run_create_secret_tests
I0211 22:07:02.650] Successful
I0211 22:07:02.650] message:Error from server (NotFound): secrets "mysecret" not found
I0211 22:07:02.651] has:secrets "mysecret" not found
I0211 22:07:02.857] Successful
I0211 22:07:02.857] message:Error from server (NotFound): secrets "mysecret" not found
I0211 22:07:02.858] has:secrets "mysecret" not found
I0211 22:07:02.859] Successful
I0211 22:07:02.859] message:user-specified
I0211 22:07:02.860] has:user-specified
I0211 22:07:02.953] Successful
I0211 22:07:03.047] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"5c78b0df-2e49-11e9-a159-0242ac110002","resourceVersion":"893","creationTimestamp":"2019-02-11T22:07:03Z"}}
... skipping 99 lines ...
I0211 22:07:07.263] has:Timeout exceeded while reading body
I0211 22:07:07.358] Successful
I0211 22:07:07.359] message:NAME        READY   STATUS    RESTARTS   AGE
I0211 22:07:07.359] valid-pod   0/1     Pending   0          2s
I0211 22:07:07.359] has:valid-pod
I0211 22:07:07.482] Successful
I0211 22:07:07.483] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0211 22:07:07.483] has:Invalid timeout value
I0211 22:07:07.609] pod "valid-pod" deleted
I0211 22:07:07.629] +++ exit code: 0
I0211 22:07:07.668] Recording: run_crd_tests
I0211 22:07:07.673] Running command: run_crd_tests
I0211 22:07:07.695] 
... skipping 2 lines ...
I0211 22:07:07.702] +++ command: run_crd_tests
I0211 22:07:07.717] +++ [0211 22:07:07] Creating namespace namespace-1549922827-2275
I0211 22:07:07.836] namespace/namespace-1549922827-2275 created
I0211 22:07:07.936] Context "test" modified.
I0211 22:07:07.945] +++ [0211 22:07:07] Testing kubectl crd
I0211 22:07:08.176] customresourcedefinition.apiextensions.k8s.io/foos.company.com created
W0211 22:07:08.276] E0211 22:07:08.175182   54241 autoregister_controller.go:190] v1.company.com failed with : apiservices.apiregistration.k8s.io "v1.company.com" already exists
I0211 22:07:08.377] crd.sh:47: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: foos.company.com:
I0211 22:07:08.524] customresourcedefinition.apiextensions.k8s.io/bars.company.com created
I0211 22:07:08.652] crd.sh:69: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:
I0211 22:07:08.837] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0211 22:07:08.971] crd.sh:96: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\" \"resources.mygroup.example.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:resources.mygroup.example.com:
I0211 22:07:09.221] customresourcedefinition.apiextensions.k8s.io/validfoos.company.com created
... skipping 153 lines ...
I0211 22:07:13.509] foo.company.com/test patched
I0211 22:07:13.594] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0211 22:07:13.672] foo.company.com/test patched
I0211 22:07:13.766] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0211 22:07:13.850] foo.company.com/test patched
I0211 22:07:13.939] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0211 22:07:14.085] +++ [0211 22:07:14] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0211 22:07:14.148] {
I0211 22:07:14.148]     "apiVersion": "company.com/v1",
I0211 22:07:14.148]     "kind": "Foo",
I0211 22:07:14.148]     "metadata": {
I0211 22:07:14.148]         "annotations": {
I0211 22:07:14.149]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
I0211 22:07:16.594] has:bar.company.com/test
I0211 22:07:16.690] bar.company.com "test" deleted
W0211 22:07:16.790] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 72902 Killed                  while [ ${tries} -lt 10 ]; do
W0211 22:07:16.791]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0211 22:07:16.791] done
W0211 22:07:16.791] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 72901 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
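The two killed processes above are the halves of the CRD watch test: crd.sh runs a bounded watch on bars while test.sh drives merge patches at the object from a retry loop. A minimal reconstruction assembled from the commands shown (kube_flags is the suite's server flag array; the values here are taken from the change-cause recorded earlier in this log):

    kube_flags=(--server=http://127.0.0.1:8080 --match-server-version=true)
    # bounded watch: prints each change to bars, gives up after the request timeout
    kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name &
    watch_pid=$!
    # drive ten merge patches at the watched object, one per second
    tries=0
    while [ ${tries} -lt 10 ]; do
      tries=$((tries+1))
      kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge
      sleep 1
    done
    wait ${watch_pid}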
W0211 22:07:31.342] E0211 22:07:31.341439   57605 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos"]
W0211 22:07:31.878] I0211 22:07:31.877975   57605 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 22:07:31.879] I0211 22:07:31.879477   54241 clientconn.go:551] parsed scheme: ""
W0211 22:07:31.880] I0211 22:07:31.879507   54241 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 22:07:31.880] I0211 22:07:31.879549   54241 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 22:07:31.880] I0211 22:07:31.879592   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 22:07:31.881] I0211 22:07:31.880834   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 53 lines ...
I0211 22:07:37.880] crd.sh:437: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test:
I0211 22:07:37.988] crd.sh:438: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:38.203] bar.company.com/test created
I0211 22:07:38.210] foo.company.com/test pruned
I0211 22:07:38.335] Waiting for Get foos {{range.items}}{{.metadata.name}}:{{end}} : expected: , got: test:
I0211 22:07:38.338] 
I0211 22:07:38.342] crd.sh:443: FAIL!
I0211 22:07:38.343] Get foos {{range.items}}{{.metadata.name}}:{{end}}
I0211 22:07:38.343]   Expected: 
I0211 22:07:38.343]   Got:      test:
I0211 22:07:38.343] 51 /go/src/k8s.io/kubernetes/hack/lib/test.sh
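The failing assertion at crd.sh:443 is a polling check: after the apply above reported foo.company.com/test pruned, the harness waited for get foos to come back empty, and the pruned object never disappeared within the retry budget. What the wait loop effectively does (a sketch; the retry count is illustrative):

    # poll until the pruned foo is gone, then fail in the format seen above
    for _ in $(seq 1 10); do
      got=$(kubectl get foos -o go-template='{{range .items}}{{.metadata.name}}:{{end}}')
      [ -z "${got}" ] && break
      sleep 1
    done
    [ -z "${got}" ] || echo "FAIL! Expected: <empty>  Got: ${got}"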
I0211 22:07:38.394] +++ exit code: 1
I0211 22:07:38.401] +++ error: 1
I0211 22:07:38.458] Error when running run_crd_tests
I0211 22:07:38.458] Recording: run_cmd_with_img_tests
I0211 22:07:38.459] Running command: run_cmd_with_img_tests
I0211 22:07:38.483] 
I0211 22:07:38.485] +++ Running case: test-cmd.run_cmd_with_img_tests 
I0211 22:07:38.489] +++ working dir: /go/src/k8s.io/kubernetes
I0211 22:07:38.492] +++ command: run_cmd_with_img_tests
... skipping 14 lines ...
I0211 22:07:38.905] +++ [0211 22:07:38] Testing cmd with image
I0211 22:07:38.905] Successful
I0211 22:07:38.905] message:deployment.apps/test1 created
I0211 22:07:38.905] has:deployment.apps/test1 created
I0211 22:07:38.905] deployment.extensions "test1" deleted
I0211 22:07:39.001] Successful
I0211 22:07:39.002] message:error: Invalid image name "InvalidImageName": invalid reference format
I0211 22:07:39.002] has:error: Invalid image name "InvalidImageName": invalid reference format
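The image-name case is client-side validation against the container image reference format, in which repository components must be lowercase, so "InvalidImageName" is rejected before anything reaches the server. The suite's exact invocation is not shown here; create deployment illustrates the same validation:

    kubectl create deployment test1 --image=k8s.gcr.io/nginx:1.7.9   # accepted
    kubectl create deployment test2 --image=InvalidImageName          # error: invalid reference format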
I0211 22:07:39.018] +++ exit code: 0
I0211 22:07:39.074] +++ [0211 22:07:39] Testing recursive resources
I0211 22:07:39.080] +++ [0211 22:07:39] Creating namespace namespace-1549922859-18062
I0211 22:07:39.167] namespace/namespace-1549922859-18062 created
I0211 22:07:39.263] Context "test" modified.
I0211 22:07:39.373] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:39.682] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:39.685] Successful
I0211 22:07:39.685] message:pod/busybox0 created
I0211 22:07:39.686] pod/busybox1 created
I0211 22:07:39.686] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 22:07:39.686] has:error validating data: kind not set
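The recursive cases feed kubectl a directory tree via -f with --recursive; hack/testdata/recursive/pod holds two valid pods plus busybox-broken.yaml, whose "kind" key is deliberately misspelled "ind" (visible in the decode errors below), so every command must succeed on the good files while surfacing the error for the broken one. The shape of the invocation (a sketch):

    kubectl create -f hack/testdata/recursive/pod --recursive
    # -> pod/busybox0 created, pod/busybox1 created,
    #    error: ... busybox-broken.yaml ... kind not set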
I0211 22:07:39.805] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:40.037] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0211 22:07:40.040] Successful
I0211 22:07:40.040] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 22:07:40.041] has:Object 'Kind' is missing
I0211 22:07:40.154] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:40.513] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0211 22:07:40.518] Successful
I0211 22:07:40.518] message:pod/busybox0 replaced
I0211 22:07:40.519] pod/busybox1 replaced
I0211 22:07:40.519] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 22:07:40.520] has:error validating data: kind not set
I0211 22:07:40.623] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:40.739] Successful
I0211 22:07:40.740] message:Name:               busybox0
I0211 22:07:40.740] Namespace:          namespace-1549922859-18062
I0211 22:07:40.740] Priority:           0
I0211 22:07:40.740] PriorityClassName:  <none>
... skipping 159 lines ...
I0211 22:07:40.765] has:Object 'Kind' is missing
I0211 22:07:40.850] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:41.054] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0211 22:07:41.057] Successful
I0211 22:07:41.057] message:pod/busybox0 annotated
I0211 22:07:41.058] pod/busybox1 annotated
I0211 22:07:41.058] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 22:07:41.058] has:Object 'Kind' is missing
I0211 22:07:41.157] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:41.463] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0211 22:07:41.466] Successful
I0211 22:07:41.466] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0211 22:07:41.466] pod/busybox0 configured
I0211 22:07:41.466] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0211 22:07:41.466] pod/busybox1 configured
I0211 22:07:41.467] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 22:07:41.467] has:error validating data: kind not set
I0211 22:07:41.568] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:41.748] deployment.apps/nginx created
W0211 22:07:41.849] I0211 22:07:41.751771   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922859-18062", Name:"nginx", UID:"738a4a2a-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5f7cff5b56 to 3
W0211 22:07:41.849] I0211 22:07:41.757346   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922859-18062", Name:"nginx-5f7cff5b56", UID:"738afc11-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1016", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-ks242
W0211 22:07:41.850] I0211 22:07:41.759783   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922859-18062", Name:"nginx-5f7cff5b56", UID:"738afc11-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1016", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-ddk2x
W0211 22:07:41.850] I0211 22:07:41.761355   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922859-18062", Name:"nginx-5f7cff5b56", UID:"738afc11-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1016", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-nd75m
... skipping 48 lines ...
W0211 22:07:42.390] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 22:07:42.490] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:42.613] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:42.615] Successful
I0211 22:07:42.616] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0211 22:07:42.616] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 22:07:42.617] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 22:07:42.617] has:Object 'Kind' is missing
I0211 22:07:42.715] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:42.801] Successful
I0211 22:07:42.802] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 22:07:42.802] has:busybox0:busybox1:
I0211 22:07:42.804] Successful
I0211 22:07:42.804] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 22:07:42.804] has:Object 'Kind' is missing
I0211 22:07:42.902] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:42.985] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 22:07:43.069] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0211 22:07:43.071] Successful
I0211 22:07:43.071] message:pod/busybox0 labeled
I0211 22:07:43.071] pod/busybox1 labeled
I0211 22:07:43.072] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 22:07:43.072] has:Object 'Kind' is missing
I0211 22:07:43.165] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:43.251] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 22:07:43.343] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0211 22:07:43.345] Successful
I0211 22:07:43.346] message:pod/busybox0 patched
I0211 22:07:43.346] pod/busybox1 patched
I0211 22:07:43.346] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 22:07:43.346] has:Object 'Kind' is missing
I0211 22:07:43.439] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:43.602] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:43.604] Successful
I0211 22:07:43.604] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 22:07:43.605] pod "busybox0" force deleted
I0211 22:07:43.605] pod "busybox1" force deleted
I0211 22:07:43.605] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 22:07:43.605] has:Object 'Kind' is missing
I0211 22:07:43.688] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:43.827] replicationcontroller/busybox0 created
I0211 22:07:43.833] replicationcontroller/busybox1 created
I0211 22:07:43.924] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:44.013] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:44.094] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 22:07:44.176] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 22:07:44.340] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0211 22:07:44.419] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0211 22:07:44.421] Successful
I0211 22:07:44.422] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0211 22:07:44.422] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0211 22:07:44.422] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:44.423] has:Object 'Kind' is missing
I0211 22:07:44.497] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0211 22:07:44.574] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0211 22:07:44.672] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:44.755] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 22:07:44.848] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 22:07:45.032] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0211 22:07:45.124] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0211 22:07:45.126] Successful
I0211 22:07:45.126] message:service/busybox0 exposed
I0211 22:07:45.126] service/busybox1 exposed
I0211 22:07:45.127] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:45.127] has:Object 'Kind' is missing
I0211 22:07:45.213] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:45.322] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 22:07:45.426] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 22:07:45.619] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0211 22:07:45.698] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0211 22:07:45.700] Successful
I0211 22:07:45.701] message:replicationcontroller/busybox0 scaled
I0211 22:07:45.701] replicationcontroller/busybox1 scaled
I0211 22:07:45.701] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:45.701] has:Object 'Kind' is missing
I0211 22:07:45.788] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:45.965] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:45.967] Successful
I0211 22:07:45.968] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 22:07:45.968] replicationcontroller "busybox0" force deleted
I0211 22:07:45.968] replicationcontroller "busybox1" force deleted
I0211 22:07:45.968] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:45.968] has:Object 'Kind' is missing
I0211 22:07:46.048] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:46.186] deployment.apps/nginx1-deployment created
I0211 22:07:46.189] deployment.apps/nginx0-deployment created
I0211 22:07:46.285] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0211 22:07:46.372] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0211 22:07:46.548] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0211 22:07:46.550] Successful
I0211 22:07:46.551] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0211 22:07:46.551] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0211 22:07:46.551] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 22:07:46.551] has:Object 'Kind' is missing
I0211 22:07:46.639] deployment.apps/nginx1-deployment paused
I0211 22:07:46.647] deployment.apps/nginx0-deployment paused
I0211 22:07:46.744] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0211 22:07:46.746] Successful
I0211 22:07:46.746] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 22:07:46.747] has:Object 'Kind' is missing
I0211 22:07:46.833] deployment.apps/nginx1-deployment resumed
I0211 22:07:46.837] deployment.apps/nginx0-deployment resumed
W0211 22:07:46.938] I0211 22:07:43.830702   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922859-18062", Name:"busybox0", UID:"74c7b3f3-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1046", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-dmnpw
W0211 22:07:46.938] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 22:07:46.938] I0211 22:07:43.835585   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922859-18062", Name:"busybox1", UID:"74c88caa-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1048", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-5hst5
W0211 22:07:46.939] I0211 22:07:45.527595   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922859-18062", Name:"busybox0", UID:"74c7b3f3-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1067", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-p67pz
W0211 22:07:46.939] I0211 22:07:45.535576   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922859-18062", Name:"busybox1", UID:"74c88caa-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-mvhjc
W0211 22:07:46.939] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 22:07:46.940] I0211 22:07:46.190517   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922859-18062", Name:"nginx1-deployment", UID:"762fbc86-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7c76c6cbb8 to 2
W0211 22:07:46.940] I0211 22:07:46.193078   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922859-18062", Name:"nginx0-deployment", UID:"76304090-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-7bb85585d7 to 2
W0211 22:07:46.940] I0211 22:07:46.193287   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922859-18062", Name:"nginx1-deployment-7c76c6cbb8", UID:"763040e9-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-jtftn
W0211 22:07:46.941] I0211 22:07:46.196786   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922859-18062", Name:"nginx1-deployment-7c76c6cbb8", UID:"763040e9-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-j29sb
W0211 22:07:46.941] I0211 22:07:46.196824   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922859-18062", Name:"nginx0-deployment-7bb85585d7", UID:"7630bbb8-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1090", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-54w57
W0211 22:07:46.941] I0211 22:07:46.205635   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922859-18062", Name:"nginx0-deployment-7bb85585d7", UID:"7630bbb8-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1090", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-f8tn4
... skipping 7 lines ...
I0211 22:07:47.043] 1         <none>
I0211 22:07:47.043] 
I0211 22:07:47.043] deployment.apps/nginx0-deployment 
I0211 22:07:47.043] REVISION  CHANGE-CAUSE
I0211 22:07:47.043] 1         <none>
I0211 22:07:47.044] 
I0211 22:07:47.044] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 22:07:47.044] has:nginx0-deployment
I0211 22:07:47.046] Successful
I0211 22:07:47.046] message:deployment.apps/nginx1-deployment 
I0211 22:07:47.046] REVISION  CHANGE-CAUSE
I0211 22:07:47.046] 1         <none>
I0211 22:07:47.046] 
I0211 22:07:47.047] deployment.apps/nginx0-deployment 
I0211 22:07:47.047] REVISION  CHANGE-CAUSE
I0211 22:07:47.047] 1         <none>
I0211 22:07:47.047] 
I0211 22:07:47.047] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 22:07:47.047] has:nginx1-deployment
I0211 22:07:47.048] Successful
I0211 22:07:47.048] message:deployment.apps/nginx1-deployment 
I0211 22:07:47.048] REVISION  CHANGE-CAUSE
I0211 22:07:47.048] 1         <none>
I0211 22:07:47.048] 
I0211 22:07:47.049] deployment.apps/nginx0-deployment 
I0211 22:07:47.049] REVISION  CHANGE-CAUSE
I0211 22:07:47.049] 1         <none>
I0211 22:07:47.049] 
I0211 22:07:47.049] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 22:07:47.049] has:Object 'Kind' is missing
I0211 22:07:47.116] deployment.apps "nginx1-deployment" force deleted
I0211 22:07:47.120] deployment.apps "nginx0-deployment" force deleted
W0211 22:07:47.220] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 22:07:47.221] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 22:07:48.225] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:48.397] replicationcontroller/busybox0 created
I0211 22:07:48.405] replicationcontroller/busybox1 created
I0211 22:07:48.510] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 22:07:48.604] Successful
I0211 22:07:48.604] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0211 22:07:48.606] message:no rollbacker has been implemented for "ReplicationController"
I0211 22:07:48.606] no rollbacker has been implemented for "ReplicationController"
I0211 22:07:48.606] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:48.607] has:Object 'Kind' is missing
I0211 22:07:48.700] Successful
I0211 22:07:48.701] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:48.701] error: replicationcontrollers "busybox0" pausing is not supported
I0211 22:07:48.701] error: replicationcontrollers "busybox1" pausing is not supported
I0211 22:07:48.701] has:Object 'Kind' is missing
I0211 22:07:48.702] Successful
I0211 22:07:48.703] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:48.703] error: replicationcontrollers "busybox0" pausing is not supported
I0211 22:07:48.703] error: replicationcontrollers "busybox1" pausing is not supported
I0211 22:07:48.703] has:replicationcontrollers "busybox0" pausing is not supported
I0211 22:07:48.705] Successful
I0211 22:07:48.705] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:48.706] error: replicationcontrollers "busybox0" pausing is not supported
I0211 22:07:48.706] error: replicationcontrollers "busybox1" pausing is not supported
I0211 22:07:48.706] has:replicationcontrollers "busybox1" pausing is not supported
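The pausing-is-not-supported messages assert that rollout pause/resume is implemented per kind: Deployments support it, ReplicationControllers do not, and the recursive walk must report both outcomes. For example:

    kubectl rollout pause deployment/nginx1-deployment   # deployment.apps/nginx1-deployment paused
    kubectl rollout pause rc/busybox0                    # error: replicationcontrollers "busybox0" pausing is not supported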
W0211 22:07:48.807] I0211 22:07:48.400552   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922859-18062", Name:"busybox0", UID:"7780fff1-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-wtnxm
W0211 22:07:48.807] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 22:07:48.808] I0211 22:07:48.408016   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922859-18062", Name:"busybox1", UID:"7781aa31-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1139", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-fvvz9
W0211 22:07:48.887] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 22:07:48.905] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:49.006] Successful
I0211 22:07:49.006] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:49.007] error: replicationcontrollers "busybox0" resuming is not supported
I0211 22:07:49.007] error: replicationcontrollers "busybox1" resuming is not supported
I0211 22:07:49.007] has:Object 'Kind' is missing
I0211 22:07:49.007] Successful
I0211 22:07:49.008] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:49.008] error: replicationcontrollers "busybox0" resuming is not supported
I0211 22:07:49.008] error: replicationcontrollers "busybox1" resuming is not supported
I0211 22:07:49.008] has:replicationcontrollers "busybox0" resuming is not supported
I0211 22:07:49.009] Successful
I0211 22:07:49.009] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 22:07:49.009] error: replicationcontrollers "busybox0" resuming is not supported
I0211 22:07:49.010] error: replicationcontrollers "busybox1" resuming is not supported
I0211 22:07:49.010] has:replicationcontrollers "busybox0" resuming is not supported
I0211 22:07:49.010] replicationcontroller "busybox0" force deleted
I0211 22:07:49.010] replicationcontroller "busybox1" force deleted
I0211 22:07:49.911] Recording: run_namespace_tests
I0211 22:07:49.911] Running command: run_namespace_tests
I0211 22:07:49.930] 
... skipping 3 lines ...
I0211 22:07:49.944] +++ [0211 22:07:49] Testing kubectl(v1:namespaces)
I0211 22:07:50.010] namespace/my-namespace created
I0211 22:07:50.098] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0211 22:07:50.177] namespace "my-namespace" deleted
I0211 22:07:55.323] namespace/my-namespace condition met
I0211 22:07:55.415] Successful
I0211 22:07:55.416] message:Error from server (NotFound): namespaces "my-namespace" not found
I0211 22:07:55.416] has: not found
I0211 22:07:55.526] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0211 22:07:55.603] namespace/other created
I0211 22:07:55.693] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0211 22:07:55.785] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:55.939] pod/valid-pod created
I0211 22:07:56.040] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:07:56.131] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:07:56.207] Successful
I0211 22:07:56.207] message:error: a resource cannot be retrieved by name across all namespaces
I0211 22:07:56.207] has:a resource cannot be retrieved by name across all namespaces
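This case asserts that a resource name cannot be combined with --all-namespaces; listing across namespaces or naming the resource within one namespace both work:

    kubectl get pods --all-namespaces             # list across all namespaces
    kubectl get pods valid-pod --namespace=other  # a named resource in one namespace
    kubectl get pods valid-pod --all-namespaces   # error: a resource cannot be retrieved by name across all namespaces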
I0211 22:07:56.304] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 22:07:56.387] pod "valid-pod" force deleted
I0211 22:07:56.481] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:07:56.554] namespace "other" deleted
W0211 22:07:56.654] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 115 lines ...
I0211 22:08:17.366] +++ command: run_client_config_tests
I0211 22:08:17.381] +++ [0211 22:08:17] Creating namespace namespace-1549922897-13333
I0211 22:08:17.458] namespace/namespace-1549922897-13333 created
I0211 22:08:17.525] Context "test" modified.
I0211 22:08:17.533] +++ [0211 22:08:17] Testing client config
I0211 22:08:17.608] Successful
I0211 22:08:17.608] message:error: stat missing: no such file or directory
I0211 22:08:17.608] has:missing: no such file or directory
I0211 22:08:17.684] Successful
I0211 22:08:17.684] message:error: stat missing: no such file or directory
I0211 22:08:17.684] has:missing: no such file or directory
I0211 22:08:17.765] Successful
I0211 22:08:17.766] message:error: stat missing: no such file or directory
I0211 22:08:17.766] has:missing: no such file or directory
I0211 22:08:17.833] Successful
I0211 22:08:17.833] message:Error in configuration: context was not found for specified context: missing-context
I0211 22:08:17.833] has:context was not found for specified context: missing-context
I0211 22:08:17.902] Successful
I0211 22:08:17.902] message:error: no server found for cluster "missing-cluster"
I0211 22:08:17.902] has:no server found for cluster "missing-cluster"
I0211 22:08:17.977] Successful
I0211 22:08:17.977] message:error: auth info "missing-user" does not exist
I0211 22:08:17.978] has:auth info "missing-user" does not exist
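Each client-config case above points one override flag at something that does not exist and asserts the config loader's error, along these lines:

    kubectl get pods --kubeconfig=missing         # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context    # context was not found for specified context
    kubectl get pods --cluster=missing-cluster    # error: no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user          # error: auth info "missing-user" does not exist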
I0211 22:08:18.117] Successful
I0211 22:08:18.117] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0211 22:08:18.117] has:Error loading config file
I0211 22:08:18.184] Successful
I0211 22:08:18.184] message:error: stat missing-config: no such file or directory
I0211 22:08:18.184] has:no such file or directory
I0211 22:08:18.200] +++ exit code: 0
I0211 22:08:18.242] Recording: run_service_accounts_tests
I0211 22:08:18.242] Running command: run_service_accounts_tests
I0211 22:08:18.264] 
I0211 22:08:18.266] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 35 lines ...
I0211 22:08:25.325] Labels:                        run=pi
I0211 22:08:25.325] Annotations:                   <none>
I0211 22:08:25.325] Schedule:                      59 23 31 2 *
I0211 22:08:25.325] Concurrency Policy:            Allow
I0211 22:08:25.326] Suspend:                       False
I0211 22:08:25.326] Successful Job History Limit:  824641192712
I0211 22:08:25.326] Failed Job History Limit:      1
I0211 22:08:25.326] Starting Deadline Seconds:     <unset>
I0211 22:08:25.326] Selector:                      <unset>
I0211 22:08:25.326] Parallelism:                   <unset>
I0211 22:08:25.327] Completions:                   <unset>
I0211 22:08:25.327] Pod Template:
I0211 22:08:25.327]   Labels:  run=pi
... skipping 31 lines ...
I0211 22:08:25.827]                 job-name=test-job
I0211 22:08:25.827]                 run=pi
I0211 22:08:25.827] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0211 22:08:25.827] Parallelism:    1
I0211 22:08:25.827] Completions:    1
I0211 22:08:25.827] Start Time:     Mon, 11 Feb 2019 22:08:25 +0000
I0211 22:08:25.828] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0211 22:08:25.828] Pod Template:
I0211 22:08:25.828]   Labels:  controller-uid=8dab0d34-2e49-11e9-a159-0242ac110002
I0211 22:08:25.828]            job-name=test-job
I0211 22:08:25.828]            run=pi
I0211 22:08:25.828]   Containers:
I0211 22:08:25.828]    pi:
... skipping 329 lines ...
I0211 22:08:37.317]   selector:
I0211 22:08:37.317]     role: padawan
I0211 22:08:37.317]   sessionAffinity: None
I0211 22:08:37.317]   type: ClusterIP
I0211 22:08:37.317] status:
I0211 22:08:37.317]   loadBalancer: {}
W0211 22:08:37.418] error: you must specify resources by --filename when --local is set.
W0211 22:08:37.418] Example resource specifications include:
W0211 22:08:37.418]    '-f rsrc.yaml'
W0211 22:08:37.418]    '--filename=rsrc.json'
I0211 22:08:37.519] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0211 22:08:37.634] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0211 22:08:37.714] service "redis-master" deleted
... skipping 92 lines ...
I0211 22:08:45.195] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 22:08:45.278] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0211 22:08:45.372] daemonset.extensions/bind rolled back
I0211 22:08:45.463] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 22:08:45.554] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 22:08:45.662] Successful
I0211 22:08:45.662] message:error: unable to find specified revision 1000000 in history
I0211 22:08:45.662] has:unable to find specified revision
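The rollback block drives kubectl rollout undo against the daemonset's revision history: existing revisions roll back (the "rolled back" lines around this assertion), while a nonexistent revision produces the error asserted here. For example:

    kubectl rollout undo daemonset/bind --to-revision=1         # daemonset.extensions/bind rolled back
    kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision 1000000 in history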
I0211 22:08:45.745] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 22:08:45.835] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 22:08:45.941] daemonset.extensions/bind rolled back
I0211 22:08:46.037] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0211 22:08:46.134] (Bapps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0211 22:08:47.628] Namespace:    namespace-1549922926-29282
I0211 22:08:47.628] Selector:     app=guestbook,tier=frontend
I0211 22:08:47.628] Labels:       app=guestbook
I0211 22:08:47.628]               tier=frontend
I0211 22:08:47.628] Annotations:  <none>
I0211 22:08:47.628] Replicas:     3 current / 3 desired
I0211 22:08:47.629] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:08:47.629] Pod Template:
I0211 22:08:47.629]   Labels:  app=guestbook
I0211 22:08:47.629]            tier=frontend
I0211 22:08:47.629]   Containers:
I0211 22:08:47.629]    php-redis:
I0211 22:08:47.629]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 22:08:47.750] Namespace:    namespace-1549922926-29282
I0211 22:08:47.751] Selector:     app=guestbook,tier=frontend
I0211 22:08:47.751] Labels:       app=guestbook
I0211 22:08:47.751]               tier=frontend
I0211 22:08:47.751] Annotations:  <none>
I0211 22:08:47.751] Replicas:     3 current / 3 desired
I0211 22:08:47.751] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:08:47.751] Pod Template:
I0211 22:08:47.751]   Labels:  app=guestbook
I0211 22:08:47.752]            tier=frontend
I0211 22:08:47.752]   Containers:
I0211 22:08:47.752]    php-redis:
I0211 22:08:47.752]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0211 22:08:47.753]   ----    ------            ----  ----                    -------
I0211 22:08:47.753]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-gfdt2
I0211 22:08:47.754]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-28r76
I0211 22:08:47.754]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-d7qn6
W0211 22:08:47.854] I0211 22:08:42.831101   54241 controller.go:606] quota admission added evaluator for: daemonsets.extensions
W0211 22:08:47.858] E0211 22:08:45.953832   57605 daemon_controller.go:302] namespace-1549922923-6739/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1549922923-6739", SelfLink:"/apis/apps/v1/namespaces/namespace-1549922923-6739/daemonsets/bind", UID:"989e190c-2e49-11e9-a159-0242ac110002", ResourceVersion:"1359", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685519723, loc:(*time.Location)(0x69f3f40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1549922923-6739\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true", "deprecated.daemonset.template.generation":"4"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc002e9ef20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00448c5d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004604540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc002e9ef80), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc004356d38)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00448c650)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0211 22:08:47.858] I0211 22:08:46.901966   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922926-29282", Name:"frontend", UID:"9a5f6981-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ft9wc
W0211 22:08:47.858] I0211 22:08:46.904164   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922926-29282", Name:"frontend", UID:"9a5f6981-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jrx8z
W0211 22:08:47.859] I0211 22:08:46.904818   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922926-29282", Name:"frontend", UID:"9a5f6981-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vwttj
W0211 22:08:47.859] I0211 22:08:47.380659   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922926-29282", Name:"frontend", UID:"9aa8b343-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1385", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gfdt2
W0211 22:08:47.859] I0211 22:08:47.382560   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922926-29282", Name:"frontend", UID:"9aa8b343-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1385", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-28r76
W0211 22:08:47.860] I0211 22:08:47.382904   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922926-29282", Name:"frontend", UID:"9aa8b343-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1385", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-d7qn6
... skipping 2 lines ...
I0211 22:08:47.961] Namespace:    namespace-1549922926-29282
I0211 22:08:47.961] Selector:     app=guestbook,tier=frontend
I0211 22:08:47.961] Labels:       app=guestbook
I0211 22:08:47.961]               tier=frontend
I0211 22:08:47.961] Annotations:  <none>
I0211 22:08:47.961] Replicas:     3 current / 3 desired
I0211 22:08:47.961] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:08:47.962] Pod Template:
I0211 22:08:47.962]   Labels:  app=guestbook
I0211 22:08:47.962]            tier=frontend
I0211 22:08:47.962]   Containers:
I0211 22:08:47.962]    php-redis:
I0211 22:08:47.962]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0211 22:08:47.984] Namespace:    namespace-1549922926-29282
I0211 22:08:47.985] Selector:     app=guestbook,tier=frontend
I0211 22:08:47.985] Labels:       app=guestbook
I0211 22:08:47.985]               tier=frontend
I0211 22:08:47.985] Annotations:  <none>
I0211 22:08:47.985] Replicas:     3 current / 3 desired
I0211 22:08:47.985] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:08:47.985] Pod Template:
I0211 22:08:47.986]   Labels:  app=guestbook
I0211 22:08:47.986]            tier=frontend
I0211 22:08:47.986]   Containers:
I0211 22:08:47.986]    php-redis:
I0211 22:08:47.986]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0211 22:08:48.133] Namespace:    namespace-1549922926-29282
I0211 22:08:48.133] Selector:     app=guestbook,tier=frontend
I0211 22:08:48.133] Labels:       app=guestbook
I0211 22:08:48.134]               tier=frontend
I0211 22:08:48.134] Annotations:  <none>
I0211 22:08:48.134] Replicas:     3 current / 3 desired
I0211 22:08:48.134] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:08:48.134] Pod Template:
I0211 22:08:48.134]   Labels:  app=guestbook
I0211 22:08:48.134]            tier=frontend
I0211 22:08:48.134]   Containers:
I0211 22:08:48.134]    php-redis:
I0211 22:08:48.134]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 22:08:48.260] Namespace:    namespace-1549922926-29282
I0211 22:08:48.260] Selector:     app=guestbook,tier=frontend
I0211 22:08:48.260] Labels:       app=guestbook
I0211 22:08:48.261]               tier=frontend
I0211 22:08:48.261] Annotations:  <none>
I0211 22:08:48.261] Replicas:     3 current / 3 desired
I0211 22:08:48.261] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:08:48.261] Pod Template:
I0211 22:08:48.261]   Labels:  app=guestbook
I0211 22:08:48.261]            tier=frontend
I0211 22:08:48.262]   Containers:
I0211 22:08:48.262]    php-redis:
I0211 22:08:48.262]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 22:08:48.362] Namespace:    namespace-1549922926-29282
I0211 22:08:48.363] Selector:     app=guestbook,tier=frontend
I0211 22:08:48.363] Labels:       app=guestbook
I0211 22:08:48.363]               tier=frontend
I0211 22:08:48.363] Annotations:  <none>
I0211 22:08:48.363] Replicas:     3 current / 3 desired
I0211 22:08:48.363] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:08:48.363] Pod Template:
I0211 22:08:48.364]   Labels:  app=guestbook
I0211 22:08:48.364]            tier=frontend
I0211 22:08:48.364]   Containers:
I0211 22:08:48.364]    php-redis:
I0211 22:08:48.364]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0211 22:08:48.473] Namespace:    namespace-1549922926-29282
I0211 22:08:48.473] Selector:     app=guestbook,tier=frontend
I0211 22:08:48.473] Labels:       app=guestbook
I0211 22:08:48.473]               tier=frontend
I0211 22:08:48.473] Annotations:  <none>
I0211 22:08:48.473] Replicas:     3 current / 3 desired
I0211 22:08:48.473] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:08:48.474] Pod Template:
I0211 22:08:48.474]   Labels:  app=guestbook
I0211 22:08:48.474]            tier=frontend
I0211 22:08:48.474]   Containers:
I0211 22:08:48.474]    php-redis:
I0211 22:08:48.474]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0211 22:08:49.306] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0211 22:08:49.399] (Bcore.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0211 22:08:49.478] (Breplicationcontroller/frontend scaled
I0211 22:08:49.569] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0211 22:08:49.644] (Breplicationcontroller "frontend" deleted
W0211 22:08:49.744] I0211 22:08:48.646771   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922926-29282", Name:"frontend", UID:"9aa8b343-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-d7qn6
W0211 22:08:49.745] error: Expected replicas to be 3, was 2
W0211 22:08:49.745] I0211 22:08:49.211408   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922926-29282", Name:"frontend", UID:"9aa8b343-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1401", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7g9j6
W0211 22:08:49.745] I0211 22:08:49.483277   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922926-29282", Name:"frontend", UID:"9aa8b343-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-7g9j6
W0211 22:08:49.835] I0211 22:08:49.835088   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549922926-29282", Name:"redis-master", UID:"9c1f4f53-2e49-11e9-a159-0242ac110002", APIVersion:"v1", ResourceVersion:"1417", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-csvnk
I0211 22:08:49.936] replicationcontroller/redis-master created
I0211 22:08:50.005] replicationcontroller/redis-slave created
I0211 22:08:50.097] replicationcontroller/redis-master scaled
... skipping 29 lines ...
I0211 22:08:51.423] service "expose-test-deployment" deleted
I0211 22:08:51.514] Successful
I0211 22:08:51.514] message:service/expose-test-deployment exposed
I0211 22:08:51.514] has:service/expose-test-deployment exposed
I0211 22:08:51.586] service "expose-test-deployment" deleted
I0211 22:08:51.665] Successful
I0211 22:08:51.665] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0211 22:08:51.665] See 'kubectl expose -h' for help and examples
I0211 22:08:51.665] has:invalid deployment: no selectors
I0211 22:08:51.744] Successful
I0211 22:08:51.744] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0211 22:08:51.744] See 'kubectl expose -h' for help and examples
I0211 22:08:51.745] has:invalid deployment: no selectors
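[editor's note] `kubectl expose` derives the Service selector from the workload's `.spec.selector`, and the error above is what that introspection reports when nothing is set. A rough Go equivalent of the check (illustrative only, not kubectl's actual code path):

```go
package main

import (
	"errors"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// serviceSelectorFor mirrors the introspection kubectl expose performs:
// a Deployment with no matchLabels cannot yield a Service selector.
func serviceSelectorFor(d *appsv1.Deployment) (map[string]string, error) {
	if d.Spec.Selector == nil || len(d.Spec.Selector.MatchLabels) == 0 {
		return nil, errors.New("invalid deployment: no selectors, therefore cannot be exposed")
	}
	return d.Spec.Selector.MatchLabels, nil
}

func main() {
	d := &appsv1.Deployment{} // no selector set
	if _, err := serviceSelectorFor(d); err != nil {
		fmt.Println("error:", err)
	}
}
```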
W0211 22:08:51.845] I0211 22:08:50.864850   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment", UID:"9cbc5ace-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1472", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64bb598779 to 3
W0211 22:08:51.846] I0211 22:08:50.867987   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-64bb598779", UID:"9cbce79c-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-r4wls
W0211 22:08:51.846] I0211 22:08:50.870448   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-64bb598779", UID:"9cbce79c-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-rqsc9
W0211 22:08:51.846] I0211 22:08:50.870567   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-64bb598779", UID:"9cbce79c-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-msvbt
... skipping 32 lines ...
I0211 22:08:54.110] service "frontend" deleted
I0211 22:08:54.116] service "frontend-2" deleted
I0211 22:08:54.122] service "frontend-3" deleted
I0211 22:08:54.127] service "frontend-4" deleted
I0211 22:08:54.133] service "frontend-5" deleted
I0211 22:08:54.218] Successful
I0211 22:08:54.218] message:error: cannot expose a Node
I0211 22:08:54.218] has:cannot expose
I0211 22:08:54.295] Successful
I0211 22:08:54.295] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0211 22:08:54.295] has:metadata.name: Invalid value
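[editor's note] The 63-character limit above comes from Service names being validated as RFC 1035 DNS labels. apimachinery exposes the same validator; a minimal sketch:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	name := "invalid-large-service-name-that-has-more-than-sixty-three-characters"
	// Service names must be valid RFC 1035 DNS labels: at most 63 characters,
	// lowercase alphanumerics and '-', starting with a letter.
	for _, msg := range validation.IsDNS1035Label(name) {
		fmt.Println(msg)
	}
}
```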
I0211 22:08:54.377] Successful
I0211 22:08:54.377] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0211 22:08:56.486] (Bhorizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 22:08:56.552] core.sh:1233: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0211 22:08:56.621] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
I0211 22:08:56.713] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 22:08:56.802] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0211 22:08:56.878] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
W0211 22:08:56.979] Error: required flag(s) "max" not set
W0211 22:08:56.979] 
W0211 22:08:56.979] 
W0211 22:08:56.979] Examples:
W0211 22:08:56.979]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0211 22:08:56.980]   kubectl autoscale deployment foo --min=2 --max=10
W0211 22:08:56.980]   
... skipping 54 lines ...
I0211 22:08:57.191]           limits:
I0211 22:08:57.191]             cpu: 300m
I0211 22:08:57.191]           requests:
I0211 22:08:57.191]             cpu: 300m
I0211 22:08:57.191]       terminationGracePeriodSeconds: 0
I0211 22:08:57.191] status: {}
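[editor's note] The `Error: required flag(s) "max" not set` output printed by `kubectl autoscale` above is cobra's standard required-flag failure. A minimal sketch of how such a flag is declared (the command wiring here is illustrative, not kubectl's real source):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use: "autoscale",
		RunE: func(cmd *cobra.Command, args []string) error {
			max, _ := cmd.Flags().GetInt32("max")
			fmt.Println("scaling up to", max, "replicas")
			return nil
		},
	}
	cmd.Flags().Int32("min", -1, "lower limit for the number of pods")
	cmd.Flags().Int32("max", -1, "upper limit for the number of pods")
	// Marking the flag required is what produces
	// `Error: required flag(s) "max" not set` when it is omitted.
	_ = cmd.MarkFlagRequired("max")
	_ = cmd.Execute()
}
```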
W0211 22:08:57.292] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0211 22:08:57.422] deployment.apps/nginx-deployment-resources created
I0211 22:08:57.519] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0211 22:08:57.612] (Bcore.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 22:08:57.703] (Bcore.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 22:08:57.798] (Bdeployment.extensions/nginx-deployment-resources resource requirements updated
I0211 22:08:57.896] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 2 lines ...
W0211 22:08:58.278] I0211 22:08:57.426068   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-resources", UID:"a0a5763c-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1663", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-695c766d58 to 3
W0211 22:08:58.278] I0211 22:08:57.429364   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-resources-695c766d58", UID:"a0a600f8-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-xg9lp
W0211 22:08:58.279] I0211 22:08:57.432376   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-resources-695c766d58", UID:"a0a600f8-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-mz8v9
W0211 22:08:58.279] I0211 22:08:57.432711   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-resources-695c766d58", UID:"a0a600f8-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-7xp4n
W0211 22:08:58.279] I0211 22:08:57.801798   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-resources", UID:"a0a5763c-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5b7fc6dd8b to 1
W0211 22:08:58.280] I0211 22:08:57.808814   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"a0df498b-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1678", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5b7fc6dd8b-gnqv4
W0211 22:08:58.280] error: unable to find container named redis
W0211 22:08:58.281] I0211 22:08:58.187812   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-resources", UID:"a0a5763c-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-5b7fc6dd8b to 0
W0211 22:08:58.281] I0211 22:08:58.192870   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"a0df498b-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5b7fc6dd8b-gnqv4
W0211 22:08:58.281] I0211 22:08:58.196626   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-resources", UID:"a0a5763c-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1689", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6bc4567bf6 to 1
W0211 22:08:58.282] I0211 22:08:58.203803   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922926-29282", Name:"nginx-deployment-resources-6bc4567bf6", UID:"a1192e7d-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6bc4567bf6-sc64m
I0211 22:08:58.382] core.sh:1263: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0211 22:08:58.383] (Bcore.sh:1264: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
... skipping 79 lines ...
I0211 22:08:58.859]     status: "True"
I0211 22:08:58.860]     type: Progressing
I0211 22:08:58.860]   observedGeneration: 4
I0211 22:08:58.860]   replicas: 4
I0211 22:08:58.860]   unavailableReplicas: 4
I0211 22:08:58.860]   updatedReplicas: 1
W0211 22:08:58.960] error: you must specify resources by --filename when --local is set.
W0211 22:08:58.961] Example resource specifications include:
W0211 22:08:58.961]    '-f rsrc.yaml'
W0211 22:08:58.961]    '--filename=rsrc.json'
I0211 22:08:59.062] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0211 22:08:59.123] (Bcore.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0211 22:08:59.217] (Bcore.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0211 22:09:00.793]                 pod-template-hash=7875bf5c8b
I0211 22:09:00.793] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0211 22:09:00.793]                 deployment.kubernetes.io/max-replicas: 2
I0211 22:09:00.793]                 deployment.kubernetes.io/revision: 1
I0211 22:09:00.793] Controlled By:  Deployment/test-nginx-apps
I0211 22:09:00.794] Replicas:       1 current / 1 desired
I0211 22:09:00.794] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:00.794] Pod Template:
I0211 22:09:00.794]   Labels:  app=test-nginx-apps
I0211 22:09:00.794]            pod-template-hash=7875bf5c8b
I0211 22:09:00.794]   Containers:
I0211 22:09:00.795]    nginx:
I0211 22:09:00.795]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
W0211 22:09:05.013] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0211 22:09:05.013] I0211 22:09:04.539678   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922939-7381", Name:"nginx", UID:"a4927997-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1880", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6458c7c55b to 1
W0211 22:09:05.013] I0211 22:09:04.542858   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922939-7381", Name:"nginx-6458c7c55b", UID:"a4e3824a-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1881", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6458c7c55b-p2975
I0211 22:09:05.996] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 22:09:06.178] (Bapps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 22:09:06.268] (Bdeployment.extensions/nginx rolled back
W0211 22:09:06.369] error: unable to find specified revision 1000000 in history
I0211 22:09:07.361] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 22:09:07.458] (Bdeployment.extensions/nginx paused
W0211 22:09:07.572] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0211 22:09:07.672] deployment.extensions/nginx resumed
I0211 22:09:07.783] deployment.extensions/nginx rolled back
I0211 22:09:07.977]     deployment.kubernetes.io/revision-history: 1,3
W0211 22:09:08.171] error: desired revision (3) is different from the running revision (5)
I0211 22:09:08.341] deployment.apps/nginx2 created
I0211 22:09:08.438] deployment.extensions "nginx2" deleted
I0211 22:09:08.536] deployment.extensions "nginx" deleted
I0211 22:09:08.634] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:09:08.784] (Bdeployment.apps/nginx-deployment created
W0211 22:09:08.885] I0211 22:09:08.344187   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922939-7381", Name:"nginx2", UID:"a7276886-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1911", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-78cb9c866 to 3
... skipping 10 lines ...
I0211 22:09:09.160] (Bdeployment.extensions/nginx-deployment image updated
I0211 22:09:09.256] apps.sh:337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 22:09:09.344] (Bapps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 22:09:09.531] (Bdeployment.extensions/nginx-deployment image updated
W0211 22:09:09.632] I0211 22:09:09.163568   57605 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549922939-7381", Name:"nginx-deployment", UID:"a76b1ddd-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1959", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5bfd55c857 to 1
W0211 22:09:09.633] I0211 22:09:09.165667   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922939-7381", Name:"nginx-deployment-5bfd55c857", UID:"a7a4fea7-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1960", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5bfd55c857-z89sb
W0211 22:09:09.633] error: unable to find container named "redis"
I0211 22:09:09.733] apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 22:09:09.746] (Bapps.sh:344: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 22:09:09.828] (Bdeployment.apps/nginx-deployment image updated
I0211 22:09:09.925] apps.sh:347: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 22:09:10.024] (Bapps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 22:09:10.190] (Bapps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
... skipping 47 lines ...
I0211 22:09:12.589] deployment.extensions/nginx-deployment env updated
I0211 22:09:12.677] deployment.extensions/nginx-deployment env updated
I0211 22:09:12.757] deployment.extensions "nginx-deployment" deleted
I0211 22:09:12.843] configmap "test-set-env-config" deleted
I0211 22:09:12.930] secret "test-set-env-secret" deleted
I0211 22:09:12.950] +++ exit code: 0
W0211 22:09:13.051] E0211 22:09:12.607771   57605 replica_set.go:450] Sync "namespace-1549922939-7381/nginx-deployment-79b6f6d8f5" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-79b6f6d8f5": the object has been modified; please apply your changes to the latest version and try again
W0211 22:09:13.051] I0211 22:09:12.762450   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922939-7381", Name:"nginx-deployment-79b6f6d8f5", UID:"a8cf2cd2-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-79b6f6d8f5-sfdw4
W0211 22:09:13.052] E0211 22:09:12.857745   57605 replica_set.go:450] Sync "namespace-1549922939-7381/nginx-deployment-5b4bdf69f4" failed with replicasets.apps "nginx-deployment-5b4bdf69f4" not found
W0211 22:09:13.058] E0211 22:09:13.057938   57605 replica_set.go:450] Sync "namespace-1549922939-7381/nginx-deployment-687fbc687d" failed with replicasets.apps "nginx-deployment-687fbc687d" not found
W0211 22:09:13.108] E0211 22:09:13.107793   57605 replica_set.go:450] Sync "namespace-1549922939-7381/nginx-deployment-5cc58864fb" failed with replicasets.apps "nginx-deployment-5cc58864fb" not found
W0211 22:09:13.158] E0211 22:09:13.157931   57605 replica_set.go:450] Sync "namespace-1549922939-7381/nginx-deployment-79b6f6d8f5" failed with replicasets.apps "nginx-deployment-79b6f6d8f5" not found
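[editor's note] The `Sync ... failed with replicasets.apps ... not found` lines are the ReplicaSet controller racing deletion: NotFound is terminal for a sync, while a conflict is worth retrying. A sketch of the usual discrimination using apimachinery's error helpers (the classifier function is illustrative, not the controller's actual code):

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// classifySyncError mirrors the common controller pattern: a 409 conflict
// is requeued, a 404 means the object is gone and the sync can be dropped.
func classifySyncError(err error) (requeue bool) {
	switch {
	case err == nil:
		return false
	case apierrors.IsNotFound(err):
		return false // object deleted out from under us; nothing to sync
	case apierrors.IsConflict(err):
		return true // stale ResourceVersion; retry with a fresh read
	default:
		return true
	}
}

func main() {
	err := apierrors.NewNotFound(
		schema.GroupResource{Group: "apps", Resource: "replicasets"},
		"nginx-deployment-79b6f6d8f5")
	fmt.Println("requeue:", classifySyncError(err)) // requeue: false
}
```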
I0211 22:09:14.102] Recording: run_rs_tests
I0211 22:09:14.102] Running command: run_rs_tests
I0211 22:09:14.120] 
I0211 22:09:14.122] +++ Running case: test-cmd.run_rs_tests 
I0211 22:09:14.124] +++ working dir: /go/src/k8s.io/kubernetes
I0211 22:09:14.127] +++ command: run_rs_tests
... skipping 31 lines ...
I0211 22:09:15.999] Namespace:    namespace-1549922954-17489
I0211 22:09:15.999] Selector:     app=guestbook,tier=frontend
I0211 22:09:15.999] Labels:       app=guestbook
I0211 22:09:15.999]               tier=frontend
I0211 22:09:15.999] Annotations:  <none>
I0211 22:09:15.999] Replicas:     3 current / 3 desired
I0211 22:09:16.000] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:16.000] Pod Template:
I0211 22:09:16.000]   Labels:  app=guestbook
I0211 22:09:16.000]            tier=frontend
I0211 22:09:16.000]   Containers:
I0211 22:09:16.000]    php-redis:
I0211 22:09:16.000]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 22:09:16.103] Namespace:    namespace-1549922954-17489
I0211 22:09:16.103] Selector:     app=guestbook,tier=frontend
I0211 22:09:16.103] Labels:       app=guestbook
I0211 22:09:16.103]               tier=frontend
I0211 22:09:16.103] Annotations:  <none>
I0211 22:09:16.103] Replicas:     3 current / 3 desired
I0211 22:09:16.103] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:16.104] Pod Template:
I0211 22:09:16.104]   Labels:  app=guestbook
I0211 22:09:16.104]            tier=frontend
I0211 22:09:16.104]   Containers:
I0211 22:09:16.104]    php-redis:
I0211 22:09:16.104]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0211 22:09:16.198] Namespace:    namespace-1549922954-17489
I0211 22:09:16.198] Selector:     app=guestbook,tier=frontend
I0211 22:09:16.198] Labels:       app=guestbook
I0211 22:09:16.198]               tier=frontend
I0211 22:09:16.198] Annotations:  <none>
I0211 22:09:16.198] Replicas:     3 current / 3 desired
I0211 22:09:16.198] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:16.199] Pod Template:
I0211 22:09:16.199]   Labels:  app=guestbook
I0211 22:09:16.199]            tier=frontend
I0211 22:09:16.199]   Containers:
I0211 22:09:16.199]    php-redis:
I0211 22:09:16.199]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0211 22:09:16.292] Namespace:    namespace-1549922954-17489
I0211 22:09:16.292] Selector:     app=guestbook,tier=frontend
I0211 22:09:16.293] Labels:       app=guestbook
I0211 22:09:16.293]               tier=frontend
I0211 22:09:16.293] Annotations:  <none>
I0211 22:09:16.293] Replicas:     3 current / 3 desired
I0211 22:09:16.293] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:16.293] Pod Template:
I0211 22:09:16.293]   Labels:  app=guestbook
I0211 22:09:16.293]            tier=frontend
I0211 22:09:16.294]   Containers:
I0211 22:09:16.294]    php-redis:
I0211 22:09:16.294]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 21 lines ...
I0211 22:09:16.498] Namespace:    namespace-1549922954-17489
I0211 22:09:16.499] Selector:     app=guestbook,tier=frontend
I0211 22:09:16.499] Labels:       app=guestbook
I0211 22:09:16.499]               tier=frontend
I0211 22:09:16.499] Annotations:  <none>
I0211 22:09:16.499] Replicas:     3 current / 3 desired
I0211 22:09:16.499] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:16.499] Pod Template:
I0211 22:09:16.499]   Labels:  app=guestbook
I0211 22:09:16.499]            tier=frontend
I0211 22:09:16.500]   Containers:
I0211 22:09:16.500]    php-redis:
I0211 22:09:16.500]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 22:09:16.510] Namespace:    namespace-1549922954-17489
I0211 22:09:16.510] Selector:     app=guestbook,tier=frontend
I0211 22:09:16.510] Labels:       app=guestbook
I0211 22:09:16.510]               tier=frontend
I0211 22:09:16.510] Annotations:  <none>
I0211 22:09:16.511] Replicas:     3 current / 3 desired
I0211 22:09:16.511] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:16.511] Pod Template:
I0211 22:09:16.511]   Labels:  app=guestbook
I0211 22:09:16.511]            tier=frontend
I0211 22:09:16.511]   Containers:
I0211 22:09:16.511]    php-redis:
I0211 22:09:16.511]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 22:09:16.601] Namespace:    namespace-1549922954-17489
I0211 22:09:16.601] Selector:     app=guestbook,tier=frontend
I0211 22:09:16.601] Labels:       app=guestbook
I0211 22:09:16.601]               tier=frontend
I0211 22:09:16.602] Annotations:  <none>
I0211 22:09:16.602] Replicas:     3 current / 3 desired
I0211 22:09:16.602] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:16.602] Pod Template:
I0211 22:09:16.602]   Labels:  app=guestbook
I0211 22:09:16.602]            tier=frontend
I0211 22:09:16.602]   Containers:
I0211 22:09:16.602]    php-redis:
I0211 22:09:16.603]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0211 22:09:16.698] Namespace:    namespace-1549922954-17489
I0211 22:09:16.698] Selector:     app=guestbook,tier=frontend
I0211 22:09:16.698] Labels:       app=guestbook
I0211 22:09:16.698]               tier=frontend
I0211 22:09:16.698] Annotations:  <none>
I0211 22:09:16.698] Replicas:     3 current / 3 desired
I0211 22:09:16.698] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:16.699] Pod Template:
I0211 22:09:16.699]   Labels:  app=guestbook
I0211 22:09:16.699]            tier=frontend
I0211 22:09:16.699]   Containers:
I0211 22:09:16.699]    php-redis:
I0211 22:09:16.699]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0211 22:09:22.059] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 22:09:22.142] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0211 22:09:22.209] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
W0211 22:09:22.310] I0211 22:09:21.667992   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922954-17489", Name:"frontend", UID:"af189ac3-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dnr9g
W0211 22:09:22.310] I0211 22:09:21.670553   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922954-17489", Name:"frontend", UID:"af189ac3-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mm6pj
W0211 22:09:22.311] I0211 22:09:21.670658   57605 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549922954-17489", Name:"frontend", UID:"af189ac3-2e49-11e9-a159-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-z5q6n
W0211 22:09:22.311] Error: required flag(s) "max" not set
W0211 22:09:22.311] 
W0211 22:09:22.311] 
W0211 22:09:22.311] Examples:
W0211 22:09:22.312]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0211 22:09:22.312]   kubectl autoscale deployment foo --min=2 --max=10
W0211 22:09:22.312]   
... skipping 85 lines ...
I0211 22:09:24.986] (Bapps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 22:09:25.066] (Bapps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0211 22:09:25.158] (Bstatefulset.apps/nginx rolled back
I0211 22:09:25.246] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0211 22:09:25.326] (Bapps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 22:09:25.418] (BSuccessful
I0211 22:09:25.418] message:error: unable to find specified revision 1000000 in history
I0211 22:09:25.419] has:unable to find specified revision
I0211 22:09:25.501] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0211 22:09:25.584] (Bapps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 22:09:25.673] (Bstatefulset.apps/nginx rolled back
I0211 22:09:25.759] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0211 22:09:25.842] (Bapps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 61 lines ...
I0211 22:09:27.509] Name:         mock
I0211 22:09:27.510] Namespace:    namespace-1549922966-13336
I0211 22:09:27.510] Selector:     app=mock
I0211 22:09:27.510] Labels:       app=mock
I0211 22:09:27.510] Annotations:  <none>
I0211 22:09:27.510] Replicas:     1 current / 1 desired
I0211 22:09:27.510] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:27.510] Pod Template:
I0211 22:09:27.510]   Labels:  app=mock
I0211 22:09:27.511]   Containers:
I0211 22:09:27.511]    mock-container:
I0211 22:09:27.511]     Image:        k8s.gcr.io/pause:2.0
I0211 22:09:27.511]     Port:         9949/TCP
... skipping 56 lines ...
I0211 22:09:29.609] Name:         mock
I0211 22:09:29.609] Namespace:    namespace-1549922966-13336
I0211 22:09:29.609] Selector:     app=mock
I0211 22:09:29.609] Labels:       app=mock
I0211 22:09:29.609] Annotations:  <none>
I0211 22:09:29.609] Replicas:     1 current / 1 desired
I0211 22:09:29.609] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:29.610] Pod Template:
I0211 22:09:29.610]   Labels:  app=mock
I0211 22:09:29.610]   Containers:
I0211 22:09:29.610]    mock-container:
I0211 22:09:29.610]     Image:        k8s.gcr.io/pause:2.0
I0211 22:09:29.610]     Port:         9949/TCP
... skipping 56 lines ...
I0211 22:09:31.659] Name:         mock
I0211 22:09:31.659] Namespace:    namespace-1549922966-13336
I0211 22:09:31.659] Selector:     app=mock
I0211 22:09:31.659] Labels:       app=mock
I0211 22:09:31.659] Annotations:  <none>
I0211 22:09:31.659] Replicas:     1 current / 1 desired
I0211 22:09:31.659] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:31.659] Pod Template:
I0211 22:09:31.659]   Labels:  app=mock
I0211 22:09:31.660]   Containers:
I0211 22:09:31.660]    mock-container:
I0211 22:09:31.660]     Image:        k8s.gcr.io/pause:2.0
I0211 22:09:31.660]     Port:         9949/TCP
... skipping 42 lines ...
I0211 22:09:33.702] Namespace:    namespace-1549922966-13336
I0211 22:09:33.702] Selector:     app=mock
I0211 22:09:33.702] Labels:       app=mock
I0211 22:09:33.702]               status=replaced
I0211 22:09:33.702] Annotations:  <none>
I0211 22:09:33.702] Replicas:     1 current / 1 desired
I0211 22:09:33.703] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:33.703] Pod Template:
I0211 22:09:33.703]   Labels:  app=mock
I0211 22:09:33.703]   Containers:
I0211 22:09:33.703]    mock-container:
I0211 22:09:33.703]     Image:        k8s.gcr.io/pause:2.0
I0211 22:09:33.703]     Port:         9949/TCP
... skipping 11 lines ...
I0211 22:09:33.704] Namespace:    namespace-1549922966-13336
I0211 22:09:33.705] Selector:     app=mock2
I0211 22:09:33.705] Labels:       app=mock2
I0211 22:09:33.705]               status=replaced
I0211 22:09:33.705] Annotations:  <none>
I0211 22:09:33.705] Replicas:     1 current / 1 desired
I0211 22:09:33.705] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 22:09:33.705] Pod Template:
I0211 22:09:33.705]   Labels:  app=mock2
I0211 22:09:33.705]   Containers:
I0211 22:09:33.705]    mock-container:
I0211 22:09:33.705]     Image:        k8s.gcr.io/pause:2.0
I0211 22:09:33.706]     Port:         9949/TCP
... skipping 110 lines ...
I0211 22:09:38.853] (Bpersistentvolume/pv0001 created
I0211 22:09:38.956] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0211 22:09:39.035] (Bpersistentvolume "pv0001" deleted
I0211 22:09:39.205] persistentvolume/pv0002 created
I0211 22:09:39.307] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0211 22:09:39.393] (Bpersistentvolume "pv0002" deleted
W0211 22:09:39.494] E0211 22:09:39.207484   57605 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
W0211 22:09:39.558] E0211 22:09:39.558242   57605 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0211 22:09:39.659] persistentvolume/pv0003 created
I0211 22:09:39.659] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0211 22:09:39.733] (Bpersistentvolume "pv0003" deleted
I0211 22:09:39.834] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 22:09:39.850] (B+++ exit code: 0
I0211 22:09:39.891] Recording: run_persistent_volume_claims_tests
... skipping 467 lines ...
I0211 22:09:44.270] yes
I0211 22:09:44.270] has:the server doesn't have a resource type
I0211 22:09:44.340] Successful
I0211 22:09:44.340] message:yes
I0211 22:09:44.340] has:yes
I0211 22:09:44.414] Successful
I0211 22:09:44.414] message:error: --subresource can not be used with NonResourceURL
I0211 22:09:44.414] has:subresource can not be used with NonResourceURL
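[editor's note] `kubectl auth can-i` builds a SelfSubjectAccessReview, and `--subresource` only exists on the resource side of the spec, hence the rejection above when combined with a NonResourceURL. A sketch of the two mutually exclusive spec shapes (field values are illustrative):

```go
package main

import (
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
)

// Exactly one of ResourceAttributes or NonResourceAttributes may be set;
// Subresource lives only on the resource side, which is why kubectl
// refuses --subresource together with a NonResourceURL.
var resourceCheck = authorizationv1.SelfSubjectAccessReview{
	Spec: authorizationv1.SelfSubjectAccessReviewSpec{
		ResourceAttributes: &authorizationv1.ResourceAttributes{
			Verb: "get", Resource: "pods", Subresource: "log",
		},
	},
}

var nonResourceCheck = authorizationv1.SelfSubjectAccessReview{
	Spec: authorizationv1.SelfSubjectAccessReviewSpec{
		NonResourceAttributes: &authorizationv1.NonResourceAttributes{
			Verb: "get", Path: "/healthz",
		},
	},
}

func main() {
	fmt.Println(resourceCheck.Spec.ResourceAttributes.Subresource,
		nonResourceCheck.Spec.NonResourceAttributes.Path)
}
```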
I0211 22:09:44.490] Successful
I0211 22:09:44.574] Successful
I0211 22:09:44.574] message:yes
I0211 22:09:44.574] 0
I0211 22:09:44.574] has:0
... skipping 6 lines ...
I0211 22:09:44.767] role.rbac.authorization.k8s.io/testing-R reconciled
I0211 22:09:44.860] legacy-script.sh:745: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0211 22:09:44.955] (Blegacy-script.sh:746: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0211 22:09:45.046] (Blegacy-script.sh:747: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0211 22:09:45.140] (Blegacy-script.sh:748: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0211 22:09:45.220] (BSuccessful
I0211 22:09:45.220] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0211 22:09:45.220] has:only rbac.authorization.k8s.io/v1 is supported
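[editor's note] `kubectl auth reconcile` accepts only `rbac.authorization.k8s.io/v1` objects, which is the rejection above for a `*v1beta1.ClusterRole`. For reference, the v1 shape in Go (a minimal illustrative object modeled on the test's `testing-CR`):

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A ClusterRole in the only group/version kubectl auth reconcile accepts.
var testingCR = rbacv1.ClusterRole{
	ObjectMeta: metav1.ObjectMeta{
		Name:   "testing-CR",
		Labels: map[string]string{"test-cmd": "auth"},
	},
	Rules: []rbacv1.PolicyRule{{
		APIGroups: []string{""},
		Resources: []string{"pods"},
		Verbs:     []string{"get", "list"},
	}},
}

func main() {
	fmt.Println(testingCR.Name)
}
```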
I0211 22:09:45.306] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0211 22:09:45.312] role.rbac.authorization.k8s.io "testing-R" deleted
I0211 22:09:45.320] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0211 22:09:45.327] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0211 22:09:45.337] Recording: run_retrieve_multiple_tests
... skipping 1017 lines ...
I0211 22:10:12.868] message:node/127.0.0.1 already uncordoned (dry run)
I0211 22:10:12.868] has:already uncordoned
I0211 22:10:12.959] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0211 22:10:13.041] (Bnode/127.0.0.1 labeled
I0211 22:10:13.140] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0211 22:10:13.215] (BSuccessful
I0211 22:10:13.215] message:error: cannot specify both a node name and a --selector option
I0211 22:10:13.216] See 'kubectl drain -h' for help and examples
I0211 22:10:13.216] has:cannot specify both a node name
I0211 22:10:13.289] Successful
I0211 22:10:13.290] message:error: USAGE: cordon NODE [flags]
I0211 22:10:13.290] See 'kubectl cordon -h' for help and examples
I0211 22:10:13.290] has:error\: USAGE\: cordon NODE
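[editor's note] Cordoning is just setting `spec.unschedulable` on the Node; a minimal client-go sketch of the same patch `kubectl cordon` sends (assuming a recent client-go; `c` is an illustrative clientset):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// cordon marks the node unschedulable, the one field kubectl cordon flips.
func cordon(c kubernetes.Interface, node string) error {
	patch := []byte(`{"spec":{"unschedulable":true}}`)
	_, err := c.CoreV1().Nodes().Patch(
		context.TODO(), node, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```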
I0211 22:10:13.368] node/127.0.0.1 already uncordoned
I0211 22:10:13.444] Successful
I0211 22:10:13.444] message:error: You must provide one or more resources by argument or filename.
I0211 22:10:13.444] Example resource specifications include:
I0211 22:10:13.444]    '-f rsrc.yaml'
I0211 22:10:13.444]    '--filename=rsrc.json'
I0211 22:10:13.444]    '<resource> <name>'
I0211 22:10:13.445]    '<resource>'
I0211 22:10:13.445] has:must provide one or more resources
... skipping 15 lines ...
I0211 22:10:13.882] Successful
I0211 22:10:13.882] message:The following compatible plugins are available:
I0211 22:10:13.882] 
I0211 22:10:13.882] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0211 22:10:13.883]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0211 22:10:13.883] 
I0211 22:10:13.883] error: one plugin warning was found
I0211 22:10:13.883] has:kubectl-version overwrites existing command: "kubectl version"
I0211 22:10:13.958] Successful
I0211 22:10:13.958] message:The following compatible plugins are available:
I0211 22:10:13.958] 
I0211 22:10:13.958] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 22:10:13.958] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0211 22:10:13.958]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 22:10:13.958] 
I0211 22:10:13.959] error: one plugin warning was found
I0211 22:10:13.959] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0211 22:10:14.028] Successful
I0211 22:10:14.028] message:The following compatible plugins are available:
I0211 22:10:14.028] 
I0211 22:10:14.028] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 22:10:14.029] has:plugins are available
I0211 22:10:14.097] Successful
I0211 22:10:14.097] message:
I0211 22:10:14.098] error: unable to find any kubectl plugins in your PATH
I0211 22:10:14.098] has:unable to find any kubectl plugins in your PATH
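[editor's note] kubectl discovers plugins by scanning `$PATH` for executables named `kubectl-*`; the "overwrites existing command" and "overshadowed" warnings above fall out of that scheme. A stdlib-only sketch of the scan, simplified relative to kubectl's real logic (it skips the executable-bit check):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// listPlugins walks $PATH and reports anything named kubectl-*; earlier
// PATH entries win, which is why a duplicate later in PATH is reported
// as overshadowed.
func listPlugins() []string {
	var found []string
	for _, dir := range filepath.SplitList(os.Getenv("PATH")) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			continue
		}
		for _, e := range entries {
			if !e.IsDir() && strings.HasPrefix(e.Name(), "kubectl-") {
				found = append(found, filepath.Join(dir, e.Name()))
			}
		}
	}
	return found
}

func main() {
	for _, p := range listPlugins() {
		fmt.Println(p)
	}
}
```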
I0211 22:10:14.164] Successful
I0211 22:10:14.165] message:I am plugin foo
I0211 22:10:14.165] has:plugin foo
I0211 22:10:14.235] Successful
I0211 22:10:14.235] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.2.537+43447e2bbf0131", GitCommit:"43447e2bbf01317243b5728b59a46d0f23cddc77", GitTreeState:"clean", BuildDate:"2019-02-11T22:02:46Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0211 22:10:14.346] 
I0211 22:10:14.349] +++ Running case: test-cmd.run_impersonation_tests 
I0211 22:10:14.351] +++ working dir: /go/src/k8s.io/kubernetes
I0211 22:10:14.353] +++ command: run_impersonation_tests
I0211 22:10:14.363] +++ [0211 22:10:14] Testing impersonation
I0211 22:10:14.434] Successful
I0211 22:10:14.434] message:error: requesting groups or user-extra for  without impersonating a user
I0211 22:10:14.435] has:without impersonating a user
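[editor's note] The rejection above enforces that impersonating groups or user-extra requires impersonating a user as well. In client-go the equivalent knob is `rest.ImpersonationConfig`; a minimal sketch (usernames match the test's fixtures, the helper is illustrative):

```go
package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// impersonatedClient returns a clientset whose requests carry
// Impersonate-User/Impersonate-Group headers. Setting Groups without
// UserName reproduces the error seen above.
func impersonatedClient(base *rest.Config) (*kubernetes.Clientset, error) {
	cfg := rest.CopyConfig(base)
	cfg.Impersonate = rest.ImpersonationConfig{
		UserName: "user1",
		Groups:   []string{"system:authenticated"},
	}
	return kubernetes.NewForConfig(cfg)
}
```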
I0211 22:10:14.600] certificatesigningrequest.certificates.k8s.io/foo created
I0211 22:10:14.698] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0211 22:10:14.787] (Bauthorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0211 22:10:14.866] (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted
I0211 22:10:15.057] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 187 lines ...
W0211 22:10:18.254] I0211 22:10:18.233460   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 5 lines ...
W0211 22:10:18.255] W0211 22:10:18.233588   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.255] W0211 22:10:18.233603   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.255] W0211 22:10:18.233628   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.256] W0211 22:10:18.233638   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.256] W0211 22:10:18.233665   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.256] W0211 22:10:18.233668   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.256] W0211 22:10:18.233671   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.257] W0211 22:10:18.233676   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.257] W0211 22:10:18.233687   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.257] W0211 22:10:18.233700   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.257] W0211 22:10:18.233701   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.257] W0211 22:10:18.233720   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.258] W0211 22:10:18.233724   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.258] W0211 22:10:18.233732   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.258] W0211 22:10:18.233732   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.258] W0211 22:10:18.233743   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.259] W0211 22:10:18.233742   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.259] W0211 22:10:18.233753   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.259] W0211 22:10:18.233763   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.259] W0211 22:10:18.233763   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.259] W0211 22:10:18.233767   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.260] W0211 22:10:18.233780   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.260] W0211 22:10:18.233787   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.260] W0211 22:10:18.233794   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.260] W0211 22:10:18.233791   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.261] W0211 22:10:18.233799   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 22:10:18.261] W0211 22:10:18.233816   54241 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 47 lines (repeats of the grpc reconnect warning above) ...
W0211 22:10:18.271] I0211 22:10:18.234575   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 22:10:18.271] I0211 22:10:18.234693   54241 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
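The burst of identical warnings above comes from the apiserver's etcd client: once the local etcd on 127.0.0.1:2379 is torn down at the end of the run, every pooled gRPC subconnection fails its dial and re-enters the reconnect backoff loop. A minimal sketch (assuming google.golang.org/grpc; this is not code from the test) that reproduces the same behavior by dialing a port with no listener:

```go
// Sketch only: dial a port with no listener, the situation the apiserver's
// etcd client is in once etcd has been stopped.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
)

func main() {
	// Non-blocking dial: a ClientConn is returned immediately and the
	// transport is retried in the background with exponential backoff,
	// producing the repeated "createTransport failed to connect" warnings.
	conn, err := grpc.Dial("127.0.0.1:2379", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Watch the connection cycle through CONNECTING/TRANSIENT_FAILURE
	// while the reconnect loop runs.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	for {
		state := conn.GetState()
		log.Printf("state: %v", state)
		if !conn.WaitForStateChange(ctx, state) {
			return // timeout reached; the real process is killed instead
		}
	}
}
```

Because the dial is non-blocking, the client never reports a single fatal error; it keeps logging one warning per retry, which is why dozens of near-identical lines appear within the same millisecond.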
W0211 22:10:18.293] make: *** [test-cmd] Error 1
I0211 22:10:18.394] No resources found
I0211 22:10:18.394] No resources found
I0211 22:10:18.394] FAILED TESTS: run_crd_tests, 
I0211 22:10:18.395] junit report dir: /workspace/artifacts
I0211 22:10:18.395] +++ [0211 22:10:18] Clean up complete
I0211 22:10:18.395] Makefile:294: recipe for target 'test-cmd' failed
W0211 22:10:19.823] Traceback (most recent call last):
W0211 22:10:19.823]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0211 22:10:19.823]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0211 22:10:19.824]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0211 22:10:19.824]     check(*cmd)
W0211 22:10:19.824]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0211 22:10:19.824]     subprocess.check_call(cmd)
W0211 22:10:19.824]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0211 22:10:19.824]     raise CalledProcessError(retcode, cmd)
W0211 22:10:19.825] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20190125-cc5d6ecff3', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
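The Python traceback is just the harness propagating the failure: scenarios/kubernetes_verify.py wraps the dockerized test run in subprocess.check_call, so the non-zero exit from `make test-cmd` inside the container surfaces as a CalledProcessError. A rough Go analogue of that check(*cmd) pattern (illustrative only, not the actual harness code):

```go
// Hedged sketch of the harness's "check" pattern: run a command, stream its
// output, and propagate a non-zero exit status as a hard failure, the way
// subprocess.check_call raises CalledProcessError.
package main

import (
	"log"
	"os"
	"os/exec"
)

// check runs a command and aborts on failure. The error returned by Run for
// a non-zero exit is an *exec.ExitError, the analogue of
// CalledProcessError.returncode.
func check(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("command %s failed: %v", name, err)
	}
}

func main() {
	// Illustrative only; the real invocation is the long `docker run ...
	// test-dockerized.sh` command shown in the traceback above.
	check("bash", "-c", "exit 2")
}
```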
E0211 22:10:19.829] Command failed
I0211 22:10:19.830] process 671 exited with code 1 after 14.7m
E0211 22:10:19.830] FAIL: pull-kubernetes-integration
I0211 22:10:19.830] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0211 22:10:20.346] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0211 22:10:20.397] process 96650 exited with code 0 after 0.0m
I0211 22:10:20.398] Call:  gcloud config get-value account
I0211 22:10:20.715] process 96662 exited with code 0 after 0.0m
I0211 22:10:20.715] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0211 22:10:20.715] Upload result and artifacts...
I0211 22:10:20.715] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73934/pull-kubernetes-integration/44370
I0211 22:10:20.716] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73934/pull-kubernetes-integration/44370/artifacts
W0211 22:10:21.874] CommandException: One or more URLs matched no objects.
E0211 22:10:22.025] Command failed
I0211 22:10:22.026] process 96674 exited with code 1 after 0.0m
W0211 22:10:22.026] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73934/pull-kubernetes-integration/44370/artifacts not exist yet
I0211 22:10:22.026] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73934/pull-kubernetes-integration/44370/artifacts
I0211 22:10:24.026] process 96816 exited with code 0 after 0.0m
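The CommandException above is expected and benign: the runner probes the artifacts prefix with `gsutil ls`, treats "matched no objects" as "not uploaded yet", and then creates the prefix with `gsutil cp -r`. A hedged sketch of the same probe-then-upload flow using the GCS Go client (bucket and prefix taken from the log; the artifact file name is illustrative):

```go
// Sketch of the probe-then-upload flow, assuming cloud.google.com/go/storage;
// this is not the actual runner code, which shells out to gsutil.
package main

import (
	"context"
	"io"
	"log"
	"os"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("storage client: %v", err)
	}
	defer client.Close()

	bucket := client.Bucket("kubernetes-jenkins")
	prefix := "pr-logs/pull/73934/pull-kubernetes-integration/44370/artifacts/"

	// Existence probe: listing an empty prefix yields iterator.Done on the
	// first Next(), the library-level analogue of gsutil's
	// "One or more URLs matched no objects".
	it := bucket.Objects(ctx, &storage.Query{Prefix: prefix})
	if _, err := it.Next(); err == iterator.Done {
		log.Printf("remote dir %s does not exist yet", prefix)
	} else if err != nil {
		log.Fatalf("list: %v", err)
	}

	// Upload one artifact; gsutil cp -r does this for the whole tree.
	f, err := os.Open("/workspace/_artifacts/junit_test-cmd.xml")
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	defer f.Close()

	w := bucket.Object(prefix + "junit_test-cmd.xml").NewWriter(ctx)
	if _, err := io.Copy(w, f); err != nil {
		log.Fatalf("upload: %v", err)
	}
	if err := w.Close(); err != nil {
		log.Fatalf("close: %v", err)
	}
}
```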
W0211 22:10:24.027] metadata path /workspace/_artifacts/metadata.json does not exist
W0211 22:10:24.027] metadata not found or invalid, init with empty metadata
... skipping 22 lines ...