PR: danielvmw: Remove unused variables from computePodPhase
Result: FAILURE
Tests: 0 failed / 86 succeeded
Started: 2019-03-20 23:32
Elapsed: 15m37s
Builder: gke-prow-containerd-pool-99179761-8gh5
Refs: master:4940eae4, 75528:b5d60be3
pod: 58b9a074-4b68-11e9-b1a5-0a580a6c1332
infra-commit: ff8e567a0
repo: k8s.io/kubernetes
repo-commit: 4acbdd9c3a9f201e9ddf63d985b72f8a761b0397
repos: {u'k8s.io/kubernetes': u'master:4940eae478248670cbed1bcde15def96229b5c7e,75528:b5d60be3ba0db8515ed8763a65f8ad66ced1f6bc'}

No Test Failures!



Error lines from build-log.txt

... skipping 309 lines ...
W0320 23:43:40.973] I0320 23:43:40.973020   46595 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0320 23:43:41.041] I0320 23:43:40.973114   46595 server.go:559] external host was not specified, using 172.17.0.2
W0320 23:43:41.042] W0320 23:43:40.973126   46595 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0320 23:43:41.042] I0320 23:43:40.973418   46595 server.go:146] Version: v1.15.0-alpha.0.1375+4acbdd9c3a9f20
W0320 23:43:41.374] I0320 23:43:41.373251   46595 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0320 23:43:41.375] I0320 23:43:41.373283   46595 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0320 23:43:41.375] E0320 23:43:41.373919   46595 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:41.376] E0320 23:43:41.373967   46595 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:41.377] E0320 23:43:41.374010   46595 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:41.377] E0320 23:43:41.374048   46595 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:41.378] E0320 23:43:41.374075   46595 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:41.378] E0320 23:43:41.374094   46595 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
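The repeated "duplicate metrics collector registration attempted" errors are Prometheus client_golang's default registry refusing a second collector with an identical descriptor; the log shows the admission plugins being initialized twice here, so the second round of workqueue metrics is rejected rather than fatal. A minimal sketch of the mechanism (the gauge name is illustrative, not the apiserver's actual metric):

    package main

    import (
        "fmt"

        "github.com/prometheus/client_golang/prometheus"
    )

    func main() {
        newDepth := func() prometheus.Gauge {
            return prometheus.NewGauge(prometheus.GaugeOpts{
                Name: "admission_quota_controller_depth", // illustrative name
                Help: "Current depth of the workqueue.",
            })
        }
        if err := prometheus.Register(newDepth()); err != nil {
            fmt.Println("first:", err) // not reached: first registration succeeds
        }
        if err := prometheus.Register(newDepth()); err != nil {
            // prints: duplicate metrics collector registration attempted
            fmt.Println("second:", err)
        }
    }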
W0320 23:43:41.379] I0320 23:43:41.374112   46595 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0320 23:43:41.379] I0320 23:43:41.374124   46595 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0320 23:43:41.380] I0320 23:43:41.376134   46595 client.go:352] parsed scheme: ""
W0320 23:43:41.380] I0320 23:43:41.376156   46595 client.go:352] scheme "" not registered, fallback to default scheme
W0320 23:43:41.380] I0320 23:43:41.376222   46595 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0320 23:43:41.381] I0320 23:43:41.376378   46595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
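The client.go and asm_amd64.s lines are grpc-go's logger tracing the apiserver's etcd client: the target "127.0.0.1:2379" carries no URI scheme, so grpc falls back to its default resolver and hands the raw address straight to the balancer (the asm_amd64.s:1337 "file" is just the call site resolving into runtime assembly). A rough sketch of a dial that produces this logging, assuming a local etcd endpoint; this is not the apiserver's actual setup code:

    package main

    import (
        "log"

        "google.golang.org/grpc"
    )

    func main() {
        // A scheme-less target triggers:
        //   scheme "" not registered, fallback to default scheme
        // after which the default resolver passes the address through unchanged.
        conn, err := grpc.Dial("127.0.0.1:2379", grpc.WithInsecure())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
    }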
... skipping 361 lines ...
W0320 23:43:42.198] W0320 23:43:42.197840   46595 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0320 23:43:42.372] I0320 23:43:42.371343   46595 client.go:352] parsed scheme: ""
W0320 23:43:42.372] I0320 23:43:42.371398   46595 client.go:352] scheme "" not registered, fallback to default scheme
W0320 23:43:42.373] I0320 23:43:42.371474   46595 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0320 23:43:42.373] I0320 23:43:42.371545   46595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0320 23:43:42.374] I0320 23:43:42.372848   46595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0320 23:43:43.462] E0320 23:43:43.458734   46595 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:43.462] E0320 23:43:43.458803   46595 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:43.463] E0320 23:43:43.458949   46595 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:43.463] E0320 23:43:43.459003   46595 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:43.463] E0320 23:43:43.459030   46595 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:43.463] E0320 23:43:43.459052   46595 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0320 23:43:43.464] I0320 23:43:43.459092   46595 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0320 23:43:43.464] I0320 23:43:43.459123   46595 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0320 23:43:43.464] I0320 23:43:43.461078   46595 client.go:352] parsed scheme: ""
W0320 23:43:43.465] I0320 23:43:43.461105   46595 client.go:352] scheme "" not registered, fallback to default scheme
W0320 23:43:43.465] I0320 23:43:43.461184   46595 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0320 23:43:43.465] I0320 23:43:43.461278   46595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 278 lines ...
W0320 23:44:34.223] I0320 23:44:34.221900   49511 controller_utils.go:1027] Waiting for caches to sync for expand controller
W0320 23:44:34.234] I0320 23:44:34.233426   49511 controllermanager.go:497] Started "namespace"
W0320 23:44:34.234] I0320 23:44:34.233864   49511 controllermanager.go:497] Started "cronjob"
W0320 23:44:34.235] I0320 23:44:34.234617   49511 namespace_controller.go:186] Starting namespace controller
W0320 23:44:34.235] I0320 23:44:34.234679   49511 controller_utils.go:1027] Waiting for caches to sync for namespace controller
W0320 23:44:34.236] I0320 23:44:34.236090   49511 cronjob_controller.go:94] Starting CronJob Manager
W0320 23:44:34.240] E0320 23:44:34.236536   49511 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0320 23:44:34.241] W0320 23:44:34.240621   49511 controllermanager.go:489] Skipping "service"
W0320 23:44:34.241] W0320 23:44:34.241058   49511 controllermanager.go:489] Skipping "ttl-after-finished"
W0320 23:44:34.242] I0320 23:44:34.242108   49511 controllermanager.go:497] Started "endpoint"
W0320 23:44:34.242] I0320 23:44:34.242466   49511 endpoints_controller.go:166] Starting endpoint controller
W0320 23:44:34.243] I0320 23:44:34.242504   49511 controller_utils.go:1027] Waiting for caches to sync for endpoint controller
I0320 23:44:34.394] node/127.0.0.1 created
... skipping 39 lines ...
W0320 23:44:34.876] I0320 23:44:34.874674   49511 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
W0320 23:44:34.876] I0320 23:44:34.874824   49511 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
W0320 23:44:34.877] I0320 23:44:34.875005   49511 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0320 23:44:34.877] I0320 23:44:34.875057   49511 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0320 23:44:34.877] I0320 23:44:34.875101   49511 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W0320 23:44:34.877] I0320 23:44:34.875260   49511 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
W0320 23:44:34.878] E0320 23:44:34.875440   49511 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0320 23:44:34.878] I0320 23:44:34.875621   49511 controllermanager.go:497] Started "resourcequota"
W0320 23:44:34.878] I0320 23:44:34.875655   49511 resource_quota_controller.go:276] Starting resource quota controller
W0320 23:44:34.878] I0320 23:44:34.876097   49511 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W0320 23:44:34.878] I0320 23:44:34.876262   49511 resource_quota_monitor.go:301] QuotaMonitor running
W0320 23:44:34.879] I0320 23:44:34.877065   49511 controllermanager.go:497] Started "job"
W0320 23:44:34.879] I0320 23:44:34.877199   49511 job_controller.go:143] Starting job controller
... skipping 8 lines ...
W0320 23:44:34.881] I0320 23:44:34.880604   49511 pv_protection_controller.go:81] Starting PV protection controller
W0320 23:44:34.881] I0320 23:44:34.880940   49511 controller_utils.go:1027] Waiting for caches to sync for PV protection controller
W0320 23:44:34.882] I0320 23:44:34.882107   49511 controllermanager.go:497] Started "statefulset"
W0320 23:44:34.882] I0320 23:44:34.882125   49511 stateful_set.go:151] Starting stateful set controller
W0320 23:44:34.882] I0320 23:44:34.882158   49511 controller_utils.go:1027] Waiting for caches to sync for stateful set controller
W0320 23:44:34.883] I0320 23:44:34.882641   49511 node_lifecycle_controller.go:77] Sending events to api server
W0320 23:44:34.883] E0320 23:44:34.882897   49511 core.go:161] failed to start cloud node lifecycle controller: no cloud provider provided
W0320 23:44:34.883] W0320 23:44:34.883090   49511 controllermanager.go:489] Skipping "cloud-node-lifecycle"
W0320 23:44:34.884] I0320 23:44:34.883905   49511 controllermanager.go:497] Started "clusterrole-aggregation"
W0320 23:44:34.884] I0320 23:44:34.884041   49511 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0320 23:44:34.884] I0320 23:44:34.884172   49511 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller
W0320 23:44:34.885] I0320 23:44:34.884918   49511 controllermanager.go:497] Started "pvc-protection"
W0320 23:44:34.885] I0320 23:44:34.884957   49511 pvc_protection_controller.go:99] Starting PVC protection controller
... skipping 11 lines ...
W0320 23:44:34.891] I0320 23:44:34.891151   49511 node_lifecycle_controller.go:401] Controller will taint node by condition.
W0320 23:44:34.892] I0320 23:44:34.891190   49511 controllermanager.go:497] Started "nodelifecycle"
W0320 23:44:34.892] I0320 23:44:34.891228   49511 core.go:171] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0320 23:44:34.892] W0320 23:44:34.891236   49511 controllermanager.go:489] Skipping "route"
W0320 23:44:34.893] I0320 23:44:34.891999   49511 node_lifecycle_controller.go:425] Starting node controller
W0320 23:44:34.893] I0320 23:44:34.892029   49511 controller_utils.go:1027] Waiting for caches to sync for taint controller
W0320 23:44:34.962] W0320 23:44:34.961502   49511 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0320 23:44:34.987] I0320 23:44:34.987294   49511 controller_utils.go:1034] Caches are synced for PV protection controller
W0320 23:44:34.988] I0320 23:44:34.987749   49511 controller_utils.go:1034] Caches are synced for certificate controller
W0320 23:44:35.001] I0320 23:44:35.000633   49511 controller_utils.go:1034] Caches are synced for service account controller
W0320 23:44:35.005] I0320 23:44:35.005007   46595 controller.go:606] quota admission added evaluator for: serviceaccounts
W0320 23:44:35.021] I0320 23:44:35.020915   49511 controller_utils.go:1034] Caches are synced for TTL controller
W0320 23:44:35.035] I0320 23:44:35.034916   49511 controller_utils.go:1034] Caches are synced for namespace controller
... skipping 17 lines ...
I0320 23:44:35.449]   "buildDate": "2019-03-20T23:42:22Z",
I0320 23:44:35.449]   "goVersion": "go1.12.1",
I0320 23:44:35.449]   "compiler": "gc",
I0320 23:44:35.449]   "platform": "linux/amd64"
I0320 23:44:35.550] }+++ [0320 23:44:35] Testing kubectl version: check client only output matches expected output
W0320 23:44:35.651] I0320 23:44:35.584397   49511 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
W0320 23:44:35.651] E0320 23:44:35.607372   49511 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0320 23:44:35.652] E0320 23:44:35.616294   49511 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0320 23:44:35.652] E0320 23:44:35.639298   49511 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0320 23:44:35.652] E0320 23:44:35.639395   49511 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0320 23:44:35.652] I0320 23:44:35.642662   49511 controller_utils.go:1034] Caches are synced for endpoint controller
W0320 23:44:35.665] I0320 23:44:35.664484   49511 controller_utils.go:1034] Caches are synced for persistent volume controller
W0320 23:44:35.666] I0320 23:44:35.665298   49511 controller_utils.go:1034] Caches are synced for HPA controller
W0320 23:44:35.670] I0320 23:44:35.670006   49511 controller_utils.go:1034] Caches are synced for attach detach controller
W0320 23:44:35.678] I0320 23:44:35.677861   49511 controller_utils.go:1034] Caches are synced for job controller
W0320 23:44:35.679] I0320 23:44:35.678697   49511 controller_utils.go:1034] Caches are synced for ReplicaSet controller
... skipping 26 lines ...
I0320 23:44:36.346] Successful: --client --output json has correct client info
I0320 23:44:36.346] Successful: --client --output json has no server info
I0320 23:44:36.347] +++ [0320 23:44:36] Testing kubectl version: compare json output using additional --short flag
I0320 23:44:36.382] Successful: --short --output client json info is equal to non short result
I0320 23:44:36.390] Successful: --short --output server json info is equal to non short result
I0320 23:44:36.394] +++ [0320 23:44:36] Testing kubectl version: compare json output with yaml output
W0320 23:44:36.571] E0320 23:44:36.571221   49511 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0320 23:44:36.672] Successful: --output json/yaml has identical information
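The "identical information" check boils down to decoding the JSON output and the YAML output into generic maps and requiring them to be equal. A sketch of that comparison, under the assumption that YAML is round-tripped through JSON the way sigs.k8s.io/yaml does it (the version struct is simplified):

    package main

    import (
        "encoding/json"
        "fmt"
        "reflect"

        "sigs.k8s.io/yaml"
    )

    type versionInfo struct {
        GitVersion string `json:"gitVersion"`
        GoVersion  string `json:"goVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        v := versionInfo{"v1.15.0-alpha.0", "go1.12.1", "linux/amd64"}

        jsonBytes, _ := json.Marshal(v)
        yamlBytes, _ := yaml.Marshal(v)

        var fromJSON, fromYAML map[string]interface{}
        json.Unmarshal(jsonBytes, &fromJSON)
        yaml.Unmarshal(yamlBytes, &fromYAML)

        // Both decodings should describe the same version.
        fmt.Println("identical:", reflect.DeepEqual(fromJSON, fromYAML))
    }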
I0320 23:44:36.673] +++ exit code: 0
I0320 23:44:36.673] Recording: run_kubectl_config_set_tests
I0320 23:44:36.673] Running command: run_kubectl_config_set_tests
I0320 23:44:36.673] 
I0320 23:44:36.673] +++ Running case: test-cmd.run_kubectl_config_set_tests 
... skipping 48 lines ...
I0320 23:44:39.793] +++ [0320 23:44:39] Creating namespace namespace-1553125479-20376
I0320 23:44:39.861] namespace/namespace-1553125479-20376 created
W0320 23:44:39.962] I0320 23:44:39.794984   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:44:39.962] I0320 23:44:39.795313   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:44:40.063] Context "test" modified.
I0320 23:44:40.063] +++ [0320 23:44:39] Testing RESTMapper
I0320 23:44:40.140] +++ [0320 23:44:40] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0320 23:44:40.159] +++ exit code: 0
I0320 23:44:40.332] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0320 23:44:40.333] bindings                                                                      true         Binding
I0320 23:44:40.333] componentstatuses                 cs                                          false        ComponentStatus
I0320 23:44:40.333] configmaps                        cm                                          true         ConfigMap
I0320 23:44:40.333] endpoints                         ep                                          true         Endpoints
... skipping 695 lines ...
I0320 23:45:07.005] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0320 23:45:07.291] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0320 23:45:07.425] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0320 23:45:07.669] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0320 23:45:07.797] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0320 23:45:07.913] pod "valid-pod" force deleted
W0320 23:45:08.014] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0320 23:45:08.015] error: setting 'all' parameter but found a non empty selector. 
W0320 23:45:08.016] I0320 23:45:07.812429   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:45:08.016] I0320 23:45:07.813126   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0320 23:45:08.017] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0320 23:45:08.119] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0320 23:45:08.206] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
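Each core.sh assertion here renders a kubectl response through a Go template such as {{range.items}}{{.metadata.name}}:{{end}} and compares the rendered string to an expectation. A standard-library sketch of how that template evaluates over a decoded list (the pod data is made up):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // A stand-in for a decoded PodList: {"items": [{"metadata": {"name": ...}}]}
        list := map[string]interface{}{
            "items": []interface{}{
                map[string]interface{}{"metadata": map[string]interface{}{"name": "valid-pod"}},
            },
        }
        tmpl := template.Must(template.New("output").Parse(`{{range .items}}{{.metadata.name}}:{{end}}`))
        // Prints "valid-pod:", the form the core.sh assertions compare against.
        tmpl.Execute(os.Stdout, list)
    }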
I0320 23:45:08.329] (Bnamespace/test-kubectl-describe-pod created
... skipping 17 lines ...
I0320 23:45:10.066] poddisruptionbudget.policy/test-pdb-3 created
I0320 23:45:10.197] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0320 23:45:10.313] poddisruptionbudget.policy/test-pdb-4 created
I0320 23:45:10.442] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0320 23:45:10.686] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}:
I0320 23:45:10.965] pod/env-test-pod created
W0320 23:45:11.066] error: min-available and max-unavailable cannot be both specified
W0320 23:45:11.066] I0320 23:45:10.814214   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:45:11.066] I0320 23:45:10.814437   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:45:11.705] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0320 23:45:11.706] Name:               env-test-pod
I0320 23:45:11.706] Namespace:          test-kubectl-describe-pod
I0320 23:45:11.706] Priority:           0
... skipping 181 lines ...
W0320 23:45:27.854] I0320 23:45:27.826316   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:45:27.955] replicationcontroller "modified" deleted
I0320 23:45:28.328] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:45:28.582] pod/valid-pod created
I0320 23:45:28.749] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0320 23:45:29.044] Successful
I0320 23:45:29.045] message:Error from server: cannot restore map from string
I0320 23:45:29.045] has:cannot restore map from string
W0320 23:45:29.145] I0320 23:45:28.831169   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:45:29.146] I0320 23:45:28.831868   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0320 23:45:29.146] E0320 23:45:29.032208   46595 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0320 23:45:29.247] Successful
I0320 23:45:29.247] message:pod/valid-pod patched (no change)
I0320 23:45:29.247] has:patched (no change)
I0320 23:45:29.344] pod/valid-pod patched
I0320 23:45:29.502] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0320 23:45:29.668] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 8 lines ...
I0320 23:45:30.601] pod/valid-pod patched
I0320 23:45:30.747] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0320 23:45:31.003] pod/valid-pod patched
W0320 23:45:31.103] I0320 23:45:30.832507   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:45:31.104] I0320 23:45:30.832699   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:45:31.204] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0320 23:45:31.419] +++ [0320 23:45:31] "kubectl patch with resourceVersion 511" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0320 23:45:31.814] pod "valid-pod" deleted
I0320 23:45:31.833] pod/valid-pod replaced
W0320 23:45:31.934] I0320 23:45:31.832866   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:45:31.934] I0320 23:45:31.833056   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:45:32.047] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0320 23:45:32.282] Successful
I0320 23:45:32.283] message:error: --grace-period must have --force specified
I0320 23:45:32.283] has:\-\-grace-period must have \-\-force specified
I0320 23:45:32.560] Successful
I0320 23:45:32.560] message:error: --timeout must have --force specified
I0320 23:45:32.561] has:\-\-timeout must have \-\-force specified
I0320 23:45:32.824] node/node-v1-test created
W0320 23:45:32.925] W0320 23:45:32.824776   49511 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
W0320 23:45:32.925] I0320 23:45:32.833253   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:45:32.925] I0320 23:45:32.833574   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:45:33.071] node/node-v1-test replaced
I0320 23:45:33.219] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0320 23:45:33.333] node "node-v1-test" deleted
I0320 23:45:33.485] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 24 lines ...
I0320 23:45:35.937]     name: kubernetes-pause
I0320 23:45:35.938] has:localonlyvalue
I0320 23:45:35.938] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0320 23:45:36.185] core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0320 23:45:36.316] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0320 23:45:36.431] pod/valid-pod labeled
W0320 23:45:36.532] error: 'name' already has a value (valid-pod), and --overwrite is false
I0320 23:45:36.633] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0320 23:45:36.717] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0320 23:45:36.843] pod "valid-pod" force deleted
W0320 23:45:36.943] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0320 23:45:36.971] I0320 23:45:36.836559   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:45:36.971] I0320 23:45:36.836716   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
... skipping 166 lines ...
W0320 23:46:00.928] I0320 23:46:00.927152   46595 trace.go:81] Trace[512879808]: "List /api/v1/namespaces/namespace-1553125559-19358/resourcequotas" (started: 2019-03-20 23:46:00.097651524 +0000 UTC m=+140.395571580) (total time: 829.468668ms):
W0320 23:46:00.928] Trace[512879808]: [829.399676ms] [829.380698ms] Listing from storage done
W0320 23:46:00.930] I0320 23:46:00.929713   46595 trace.go:81] Trace[1054504029]: "Create /api/v1/namespaces/namespace-1553125559-19358/serviceaccounts" (started: 2019-03-20 23:46:00.096540398 +0000 UTC m=+140.394460434) (total time: 833.142963ms):
W0320 23:46:00.930] Trace[1054504029]: [833.036832ms] [832.689413ms] Object stored in database
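The Trace[...] pairs are the apiserver's request tracing: a trace records named steps and is written to the log only when the whole operation exceeds a latency threshold, which is why they surface during this slow List/Create. A sketch of the pattern using k8s.io/utils/trace, where the helper lives today (the function body is invented):

    package main

    import (
        "time"

        utiltrace "k8s.io/utils/trace"
    )

    func listFromStorage() {
        trace := utiltrace.New("List /api/v1/namespaces/.../resourcequotas")
        // Only traces slower than the threshold are written out,
        // so fast requests leave no Trace[...] lines behind.
        defer trace.LogIfLong(500 * time.Millisecond)

        time.Sleep(800 * time.Millisecond) // stand-in for the actual storage list
        trace.Step("Listing from storage done")
    }

    func main() { listFromStorage() }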
I0320 23:46:01.035] namespace/namespace-1553125559-19358 created
I0320 23:46:01.546] Context "test" modified.
I0320 23:46:01.546] +++ [0320 23:46:01] Testing kubectl create with error
I0320 23:46:01.577] +++ [0320 23:46:01] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0320 23:46:01.678] Error: must specify one of -f and -k
W0320 23:46:01.852] 
W0320 23:46:01.852] Create a resource from a file or from stdin.
W0320 23:46:01.853] 
W0320 23:46:01.853]  JSON and YAML formats are accepted.
W0320 23:46:01.853] 
W0320 23:46:01.853] Examples:
... skipping 85 lines ...
W0320 23:46:06.297] I0320 23:46:06.297235   46595 client.go:352] scheme "" not registered, fallback to default scheme
W0320 23:46:06.298] I0320 23:46:06.297836   46595 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0320 23:46:06.298] I0320 23:46:06.298448   46595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0320 23:46:06.299] I0320 23:46:06.299497   46595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0320 23:46:06.302] I0320 23:46:06.302220   46595 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0320 23:46:06.403] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0320 23:46:06.504] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0320 23:46:06.604] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0320 23:46:06.613] +++ exit code: 0
I0320 23:46:06.662] Recording: run_kubectl_run_tests
I0320 23:46:06.663] Running command: run_kubectl_run_tests
I0320 23:46:06.688] 
I0320 23:46:06.692] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 122 lines ...
I0320 23:46:13.230] Context "test" modified.
I0320 23:46:13.230] +++ [0320 23:46:13] Testing kubectl create filter
I0320 23:46:13.262] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:46:13.511] pod/selector-test-pod created
I0320 23:46:13.668] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0320 23:46:13.795] Successful
I0320 23:46:13.796] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0320 23:46:13.796] has:pods "selector-test-pod-dont-apply" not found
W0320 23:46:13.897] I0320 23:46:13.862746   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:13.897] I0320 23:46:13.862955   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:46:13.998] pod "selector-test-pod" deleted
I0320 23:46:13.998] +++ exit code: 0
I0320 23:46:13.999] Recording: run_kubectl_apply_deployments_tests
... skipping 32 lines ...
I0320 23:46:16.564] apps.sh:131: Successful get deployments my-depl {{.metadata.labels.l2}}: l2
I0320 23:46:16.717] deployment.extensions "my-depl" deleted
I0320 23:46:16.770] replicaset.extensions "my-depl-64775887d7" deleted
I0320 23:46:16.792] replicaset.extensions "my-depl-656cffcbcc" deleted
I0320 23:46:16.849] pod "my-depl-64775887d7-plgbh" deleted
I0320 23:46:16.859] pod "my-depl-656cffcbcc-khkc4" deleted
W0320 23:46:16.960] E0320 23:46:16.853807   49511 replica_set.go:450] Sync "namespace-1553125574-18988/my-depl-64775887d7" failed with replicasets.apps "my-depl-64775887d7" not found
W0320 23:46:16.961] I0320 23:46:16.864019   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:16.961] I0320 23:46:16.864504   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:46:17.062] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:46:17.157] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}:
I0320 23:46:17.296] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
I0320 23:46:17.436] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}:
... skipping 3 lines ...
W0320 23:46:17.820] I0320 23:46:17.769183   49511 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553125574-18988", Name:"nginx-776cc67f78", UID:"5b083268-4b6a-11e9-bc6f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-cjk97
W0320 23:46:17.821] I0320 23:46:17.770374   49511 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553125574-18988", Name:"nginx-776cc67f78", UID:"5b083268-4b6a-11e9-bc6f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-4zdns
W0320 23:46:17.865] I0320 23:46:17.864713   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:17.866] I0320 23:46:17.864948   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:46:17.966] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0320 23:46:22.734] Successful
I0320 23:46:22.734] message:Error from server (Conflict): error when applying patch:
I0320 23:46:22.735] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1553125574-18988\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0320 23:46:22.735] to:
I0320 23:46:22.735] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0320 23:46:22.736] Name: "nginx", Namespace: "namespace-1553125574-18988"
I0320 23:46:22.738] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1553125574-18988\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-03-20T23:46:17Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-03-20T23:46:17Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-03-20T23:46:17Z"]] "name":"nginx" "namespace":"namespace-1553125574-18988" "resourceVersion":"634" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1553125574-18988/deployments/nginx" "uid":"5b042ac3-4b6a-11e9-bc6f-0242ac110002"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-03-20T23:46:17Z" "lastUpdateTime":"2019-03-20T23:46:17Z" "message":"Deployment does not have minimum 
availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0320 23:46:22.739] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0320 23:46:22.739] has:Error from server (Conflict)
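This Conflict is optimistic concurrency working as intended: the apply patch carried resourceVersion "99" while the live Deployment had moved on, so the server rejects the write rather than clobbering the newer object. The usual client-side remedy is a read-modify-write loop, for example with client-go's retry helper (a sketch; updateDeployment is a hypothetical placeholder):

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/retry"
    )

    func main() {
        // RetryOnConflict re-runs the closure whenever it returns a Conflict
        // error, backing off between attempts (retry.DefaultRetry).
        err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Hypothetical flow: GET the latest object, mutate it, then
            // UPDATE; a stale resourceVersion surfaces as a Conflict.
            return updateDeployment()
        })
        if err != nil {
            fmt.Println("update failed after retries:", err)
        }
    }

    // updateDeployment is a placeholder for the read-modify-write cycle.
    func updateDeployment() error { return nil }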
W0320 23:46:22.839] I0320 23:46:18.867380   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:22.840] I0320 23:46:18.869048   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0320 23:46:22.840] I0320 23:46:19.869346   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:22.841] I0320 23:46:19.869580   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0320 23:46:22.841] I0320 23:46:20.869766   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:22.841] I0320 23:46:20.870073   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
... skipping 6 lines ...
W0320 23:46:24.873] I0320 23:46:24.872449   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:24.874] I0320 23:46:24.873000   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0320 23:46:25.874] I0320 23:46:25.873284   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:25.875] I0320 23:46:25.873513   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0320 23:46:26.874] I0320 23:46:26.873658   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:26.875] I0320 23:46:26.874008   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0320 23:46:27.222] E0320 23:46:27.221338   49511 replica_set.go:450] Sync "namespace-1553125574-18988/nginx-776cc67f78" failed with replicasets.apps "nginx-776cc67f78" not found
W0320 23:46:27.874] I0320 23:46:27.874184   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:27.875] I0320 23:46:27.874357   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:46:28.161] deployment.extensions/nginx configured
W0320 23:46:28.262] I0320 23:46:28.168192   49511 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1553125574-18988", Name:"nginx", UID:"613db023-4b6a-11e9-bc6f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7bd4fbc645 to 3
W0320 23:46:28.263] I0320 23:46:28.174285   49511 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553125574-18988", Name:"nginx-7bd4fbc645", UID:"613ea9b2-4b6a-11e9-bc6f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-mmrv2
W0320 23:46:28.263] I0320 23:46:28.183955   49511 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553125574-18988", Name:"nginx-7bd4fbc645", UID:"613ea9b2-4b6a-11e9-bc6f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-trs79
... skipping 188 lines ...
W0320 23:46:36.894] I0320 23:46:36.880981   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:46:36.995] namespace/namespace-1553125596-20800 created
I0320 23:46:37.022] Context "test" modified.
I0320 23:46:37.033] +++ [0320 23:46:37] Testing kubectl get
I0320 23:46:37.175] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:46:37.309] Successful
I0320 23:46:37.309] message:Error from server (NotFound): pods "abc" not found
I0320 23:46:37.309] has:pods "abc" not found
I0320 23:46:37.442] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:46:37.570] Successful
I0320 23:46:37.570] message:Error from server (NotFound): pods "abc" not found
I0320 23:46:37.570] has:pods "abc" not found
I0320 23:46:37.698] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:46:37.837] Successful
I0320 23:46:37.838] message:{
I0320 23:46:37.838]     "apiVersion": "v1",
I0320 23:46:37.838]     "items": [],
... skipping 25 lines ...
I0320 23:46:38.452] has not:No resources found
I0320 23:46:38.595] Successful
I0320 23:46:38.598] message:NAME
I0320 23:46:38.598] has not:No resources found
I0320 23:46:38.747] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:46:38.935] Successful
I0320 23:46:38.935] message:error: the server doesn't have a resource type "foobar"
I0320 23:46:38.936] has not:No resources found
W0320 23:46:39.043] I0320 23:46:38.881192   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:39.044] I0320 23:46:38.881694   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:46:39.144] Successful
I0320 23:46:39.145] message:No resources found.
I0320 23:46:39.145] has:No resources found
... skipping 2 lines ...
I0320 23:46:39.246] has not:No resources found
I0320 23:46:39.409] Successful
I0320 23:46:39.410] message:No resources found.
I0320 23:46:39.410] has:No resources found
I0320 23:46:39.581] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:46:39.727] Successful
I0320 23:46:39.728] message:Error from server (NotFound): pods "abc" not found
I0320 23:46:39.728] has:pods "abc" not found
I0320 23:46:39.737] FAIL!
I0320 23:46:39.738] message:Error from server (NotFound): pods "abc" not found
I0320 23:46:39.738] has not:List
I0320 23:46:39.738] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
W0320 23:46:39.919] I0320 23:46:39.919086   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:39.920] I0320 23:46:39.919212   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:46:40.020] Successful
I0320 23:46:40.021] message:I0320 23:46:39.855028   59785 loader.go:359] Config loaded from file /tmp/tmp.PVHQcKACOc/.kube/config
... skipping 718 lines ...
I0320 23:46:44.512] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0320 23:46:44.884] <no value>Successful
I0320 23:46:44.885] message:valid-pod:
I0320 23:46:44.885] has:valid-pod:
W0320 23:46:44.986] I0320 23:46:44.922858   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:46:44.986] I0320 23:46:44.923640   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0320 23:46:45.081] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0320 23:46:45.182] Successful
I0320 23:46:45.182] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0320 23:46:45.183] 	template was:
I0320 23:46:45.183] 		{.missing}
I0320 23:46:45.183] 	object given to jsonpath engine was:
I0320 23:46:45.185] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-20T23:46:44Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-20T23:46:44Z"}}, "name":"valid-pod", "namespace":"namespace-1553125603-19456", "resourceVersion":"730", "selfLink":"/api/v1/namespaces/namespace-1553125603-19456/pods/valid-pod", "uid":"6acf338f-4b6a-11e9-bc6f-0242ac110002"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0320 23:46:45.185] has:missing is not found
I0320 23:46:45.185] Successful
I0320 23:46:45.185] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0320 23:46:45.186] 	template was:
I0320 23:46:45.186] 		{{.missing}}
I0320 23:46:45.186] 	raw data was:
I0320 23:46:45.187] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-20T23:46:44Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-20T23:46:44Z"}],"name":"valid-pod","namespace":"namespace-1553125603-19456","resourceVersion":"730","selfLink":"/api/v1/namespaces/namespace-1553125603-19456/pods/valid-pod","uid":"6acf338f-4b6a-11e9-bc6f-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0320 23:46:45.187] 	object given to template engine was:
I0320 23:46:45.188] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-20T23:46:44Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-20T23:46:44Z]] name:valid-pod namespace:namespace-1553125603-19456 resourceVersion:730 selfLink:/api/v1/namespaces/namespace-1553125603-19456/pods/valid-pod uid:6acf338f-4b6a-11e9-bc6f-0242ac110002] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
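Both failures above come from the output engines being configured to treat absent keys as errors instead of printing <no value>; the "map has no entry for key" text matches text/template's missingkey=error mode. A standard-library reproduction:

    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    func main() {
        tmpl := template.Must(template.New("output").Parse(`{{.missing}}`))
        // With this option, absent map keys become errors rather than the
        // default "<no value>" placeholder.
        tmpl = tmpl.Option("missingkey=error")

        pod := map[string]interface{}{"kind": "Pod"} // no "missing" key
        if err := tmpl.Execute(os.Stdout, pod); err != nil {
            fmt.Println(err) // ... map has no entry for key "missing"
        }
    }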
... skipping 167 lines ...
I0320 23:46:49.093]   terminationGracePeriodSeconds: 30
I0320 23:46:49.093] status:
I0320 23:46:49.093]   phase: Pending
I0320 23:46:49.093]   qosClass: Guaranteed
I0320 23:46:49.093] has:name: valid-pod
I0320 23:46:49.093] Successful
I0320 23:46:49.094] message:Error from server (NotFound): pods "invalid-pod" not found
I0320 23:46:49.094] has:"invalid-pod" not found
I0320 23:46:49.112] pod "valid-pod" deleted
I0320 23:46:49.239] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:46:49.481] pod/redis-master created
I0320 23:46:49.489] pod/valid-pod created
I0320 23:46:49.597] Successful
... skipping 295 lines ...
I0320 23:46:57.181] Running command: run_create_secret_tests
I0320 23:46:57.210] 
I0320 23:46:57.214] +++ Running case: test-cmd.run_create_secret_tests 
I0320 23:46:57.218] +++ working dir: /go/src/k8s.io/kubernetes
I0320 23:46:57.222] +++ command: run_create_secret_tests
I0320 23:46:57.345] Successful
I0320 23:46:57.346] message:Error from server (NotFound): secrets "mysecret" not found
I0320 23:46:57.346] has:secrets "mysecret" not found
I0320 23:46:57.567] Successful
I0320 23:46:57.568] message:Error from server (NotFound): secrets "mysecret" not found
I0320 23:46:57.568] has:secrets "mysecret" not found
I0320 23:46:57.570] Successful
I0320 23:46:57.571] message:user-specified
I0320 23:46:57.571] has:user-specified
I0320 23:46:57.672] Successful
I0320 23:46:57.771] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"72e3f190-4b6a-11e9-bc6f-0242ac110002","resourceVersion":"838","creationTimestamp":"2019-03-20T23:46:57Z"}}
... skipping 180 lines ...
I0320 23:47:02.579] has:Timeout exceeded while reading body
I0320 23:47:02.684] Successful
I0320 23:47:02.685] message:NAME        READY   STATUS    RESTARTS   AGE
I0320 23:47:02.685] valid-pod   0/1     Pending   0          1s
I0320 23:47:02.685] has:valid-pod
I0320 23:47:02.784] Successful
I0320 23:47:02.785] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0320 23:47:02.785] has:Invalid timeout value
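The rejected value illustrates the two timeout spellings kubectl accepts: a bare integer, read as seconds, or an integer with a time unit as parsed by Go durations (1s, 2m, 3h). A sketch of that rule as the error message describes it (my reconstruction, not kubectl's code):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // parseTimeout accepts "30" (seconds) or any time.ParseDuration form ("1s", "2m", "3h").
    func parseTimeout(s string) (time.Duration, error) {
        if n, err := strconv.Atoi(s); err == nil {
            return time.Duration(n) * time.Second, nil
        }
        if d, err := time.ParseDuration(s); err == nil {
            return d, nil
        }
        return 0, fmt.Errorf("invalid timeout value %q", s)
    }

    func main() {
        for _, s := range []string{"30", "2m", "bogus"} {
            d, err := parseTimeout(s)
            fmt.Println(s, "->", d, err)
        }
    }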
I0320 23:47:02.887] pod "valid-pod" deleted
I0320 23:47:02.919] +++ exit code: 0
I0320 23:47:02.969] Recording: run_crd_tests
I0320 23:47:02.969] Running command: run_crd_tests
I0320 23:47:02.998] 
... skipping 71 lines ...
W0320 23:47:08.477] I0320 23:47:07.657403   49511 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W0320 23:47:08.569] I0320 23:47:07.659449   46595 client.go:352] parsed scheme: ""
W0320 23:47:08.570] I0320 23:47:07.659490   46595 client.go:352] scheme "" not registered, fallback to default scheme
W0320 23:47:08.570] I0320 23:47:07.659539   46595 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0320 23:47:08.571] I0320 23:47:07.659638   46595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0320 23:47:08.571] I0320 23:47:07.660009   46595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0320 23:47:08.572] E0320 23:47:07.681839   49511 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"]
W0320 23:47:08.572] I0320 23:47:07.795550   46595 trace.go:81] Trace[1846071183]: "List /apis/company.com/v1/namespaces/namespace-1553125624-21656/foos" (started: 2019-03-20 23:47:07.070709915 +0000 UTC m=+207.368629926) (total time: 724.800675ms):
W0320 23:47:08.572] Trace[1846071183]: [724.453333ms] [724.397585ms] Listing from storage done
W0320 23:47:08.572] I0320 23:47:07.857754   49511 controller_utils.go:1034] Caches are synced for garbage collector controller
W0320 23:47:08.572] I0320 23:47:07.936520   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:47:08.573] I0320 23:47:07.936764   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:47:08.673] NAME   AGE
... skipping 181 lines ...
I0320 23:47:10.593] foo.company.com/test patched
I0320 23:47:10.721] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0320 23:47:10.832] foo.company.com/test patched
W0320 23:47:10.938] I0320 23:47:10.938006   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:47:10.939] I0320 23:47:10.938208   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:47:11.039] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0320 23:47:11.200] +++ [0320 23:47:11] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0320 23:47:11.289] {
I0320 23:47:11.289]     "apiVersion": "company.com/v1",
I0320 23:47:11.290]     "kind": "Foo",
I0320 23:47:11.290]     "metadata": {
I0320 23:47:11.290]         "annotations": {
I0320 23:47:11.291]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 352 lines ...
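The patch failure above is the expected behavior: a strategic merge patch needs the Go type metadata that built-in kinds carry, which a CustomResource lacks, so kubectl suggests --type merge. A sketch of the failing and working forms, reconstructed from the recorded change-cause (the file name and patch value are hypothetical):
kubectl patch -f foo.yaml --local -p '{"patched":"value"}' -o yaml   # rejected: strategic merge needs a known Go type
kubectl patch foos/test --type=merge -p '{"patched":null}' --record  # accepted: plain JSON merge patch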
W0320 23:47:26.950] I0320 23:47:26.950424   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:47:26.951] I0320 23:47:26.950673   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0320 23:47:27.951] I0320 23:47:27.950867   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:47:27.952] I0320 23:47:27.951039   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:47:28.140] crd.sh:459: Successful get bars {{len .items}}: 0
I0320 23:47:28.449] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0320 23:47:28.549] Error from server (NotFound): namespaces "non-native-resources" not found
I0320 23:47:28.650] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0320 23:47:28.834] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
W0320 23:47:28.952] I0320 23:47:28.951270   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:47:28.952] I0320 23:47:28.951571   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:47:29.053] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0320 23:47:29.088] +++ exit code: 0
... skipping 14 lines ...
I0320 23:47:30.234] has:deployment.apps/test1 created
W0320 23:47:30.335] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0320 23:47:30.335] I0320 23:47:30.208163   49511 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1553125649-26404", Name:"test1", UID:"8637593b-4b6a-11e9-bc6f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"992", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-848d5d4b47 to 1
W0320 23:47:30.336] I0320 23:47:30.236270   49511 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553125649-26404", Name:"test1-848d5d4b47", UID:"8638b84f-4b6a-11e9-bc6f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"993", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-848d5d4b47-tzqdq
I0320 23:47:30.436] deployment.extensions "test1" deleted
I0320 23:47:30.536] Successful
I0320 23:47:30.537] message:error: Invalid image name "InvalidImageName": invalid reference format
I0320 23:47:30.537] has:error: Invalid image name "InvalidImageName": invalid reference format
I0320 23:47:30.561] +++ exit code: 0
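The rejected run above fails container image validation: image references follow the docker reference grammar, which forbids uppercase letters in the repository name. A few illustrative values (the test2 name and image paths are hypothetical):
kubectl run test2 --image=busybox              # valid: plain lowercase repository
kubectl run test2 --image=gcr.io/org/app:v1.0  # valid: registry/repository:tag
kubectl run test2 --image=InvalidImageName     # invalid reference format: uppercase letters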
I0320 23:47:30.634] +++ [0320 23:47:30] Testing recursive resources
I0320 23:47:30.647] +++ [0320 23:47:30] Creating namespace namespace-1553125650-10701
I0320 23:47:30.771] namespace/namespace-1553125650-10701 created
I0320 23:47:30.890] Context "test" modified.
W0320 23:47:30.991] I0320 23:47:30.952745   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:47:30.992] I0320 23:47:30.953536   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:47:31.092] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:47:31.514] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0320 23:47:31.518] Successful
I0320 23:47:31.518] message:pod/busybox0 created
I0320 23:47:31.518] pod/busybox1 created
I0320 23:47:31.519] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0320 23:47:31.519] has:error validating data: kind not set
I0320 23:47:31.670] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0320 23:47:31.961] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0320 23:47:31.967] Successful
I0320 23:47:31.969] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0320 23:47:31.970] has:Object 'Kind' is missing
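Both failures above come from the deliberately broken fixture busybox-broken.yaml, whose JSON spells the kind field as "ind", so the decoder cannot find Object 'Kind' and validation reports "kind not set". For contrast, a minimal corrected object fed through a heredoc (a hypothetical invocation; the fixture itself is meant to stay broken):
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod            # the field the fixture misspells as "ind"
metadata:
  name: busybox2
  labels:
    app: busybox2
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF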
W0320 23:47:32.071] I0320 23:47:31.953684   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:47:32.072] I0320 23:47:31.954024   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:47:32.172] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0320 23:47:32.683] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0320 23:47:32.688] Successful
I0320 23:47:32.689] message:pod/busybox0 replaced
I0320 23:47:32.689] pod/busybox1 replaced
I0320 23:47:32.689] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0320 23:47:32.689] has:error validating data: kind not set
I0320 23:47:32.847] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0320 23:47:33.016] Successful
I0320 23:47:33.017] message:Name:               busybox0
I0320 23:47:33.017] Namespace:          namespace-1553125650-10701
I0320 23:47:33.017] Priority:           0
I0320 23:47:33.017] PriorityClassName:  <none>
... skipping 162 lines ...
W0320 23:47:33.143] I0320 23:47:32.958444   49511 namespace_controller.go:171] Namespace has been deleted non-native-resources
I0320 23:47:33.244] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0320 23:47:33.494] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0320 23:47:33.497] Successful
I0320 23:47:33.498] message:pod/busybox0 annotated
I0320 23:47:33.498] pod/busybox1 annotated
I0320 23:47:33.498] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0320 23:47:33.499] has:Object 'Kind' is missing
I0320 23:47:33.658] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0320 23:47:34.162] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0320 23:47:34.165] Successful
I0320 23:47:34.166] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0320 23:47:34.166] pod/busybox0 configured
I0320 23:47:34.166] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0320 23:47:34.167] pod/busybox1 configured
I0320 23:47:34.167] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0320 23:47:34.167] has:error validating data: kind not set
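The apply warnings above appear because busybox0 and busybox1 were first created by plain kubectl create, which does not record the last-applied-configuration annotation that apply diffs against. Two hedged ways to get the annotation in place (the file name is hypothetical):
kubectl create -f pod.yaml --save-config                             # record the annotation at creation time
kubectl apply set-last-applied -f pod.yaml --create-annotation=true  # or backfill it on an existing object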
W0320 23:47:34.268] I0320 23:47:33.954523   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0320 23:47:34.268] I0320 23:47:33.954739   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0320 23:47:34.369] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0320 23:47:34.586] deployment.apps/nginx created
W0320 23:47:34.687] I0320 23:47:34.596515   49511 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1553125650-10701", Name:"nginx", UID:"88d56225-4b6a-11e9-bc6f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5f7cff5b56 to 3
W0320 23:47:34.688] I0320 23:47:34.602880   49511 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553125650-10701", Name:"nginx-5f7cff5b56", UID:"88d6c2df-4b6a-11e9-bc6f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-zm9jw
... skipping 49 lines ...
W0320 23:47:35.299] I0320 23:47:34.955171   46595 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0320 23:47:35.299] kubectl convert is DEPRECATED and will be removed in a future version.
W0320 23:47:35.300] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0320 23:47:35.400] deployment.extensions "nginx" deleted
I0320 23:47:35.480] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: busybox0:busybox1:, got: busybox0:busybox1:nginx-5f7cff5b56-4flqh:nginx-5f7cff5b56-n89tf:nginx-5f7cff5b56-zm9jw:
I0320 23:47:35.484] 
I0320 23:47:35.489] generic-resources.sh:280: FAIL!
I0320 23:47:35.490] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I0320 23:47:35.490]   Expected: busybox0:busybox1:
I0320 23:47:35.490]   Got:      busybox0:busybox1:nginx-5f7cff5b56-4flqh:nginx-5f7cff5b56-n89tf:nginx-5f7cff5b56-zm9jw:
I0320 23:47:35.490] 51 /go/src/k8s.io/kubernetes/hack/lib/test.sh
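The FAIL above is the assertion helper in hack/lib/test.sh (the frame listed after the expected/got pair): it renders every pod name through a go-template and compares the joined string. Roughly the equivalent query (reconstructed, not quoted from the script):
kubectl get pods -o go-template='{{range.items}}{{.metadata.name}}:{{end}}'
# expected "busybox0:busybox1:" but three nginx replicas were still terminating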
... skipping 71 lines ...
W0320 23:47:35.605] I0320 23:47:35.528566   46595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 5 lines ...
W0320 23:47:35.606] W0320 23:47:35.528880   46595 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 110 lines ...
W0320 23:47:35.633] I0320 23:47:35.531367   46595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
... skipping 15 lines ...
W0320 23:47:35.637] I0320 23:47:35.531555   46595 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
... skipping 5 lines ...
W0320 23:47:35.639] E0320 23:47:35.531649   46595 controller.go:179] rpc error: code = Unavailable desc = transport is closing
W0320 23:47:35.639] W0320 23:47:35.532448   46595 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0320 23:47:35.639] W0320 23:47:35.532475   46595 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0320 23:47:35.678] make: *** [test-cmd] Error 1
I0320 23:47:35.779] junit report dir: /workspace/artifacts
I0320 23:47:35.779] +++ [0320 23:47:35] Clean up complete
I0320 23:47:35.779] Makefile:298: recipe for target 'test-cmd' failed
W0320 23:48:13.943] Traceback (most recent call last):
W0320 23:48:13.944]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0320 23:48:13.944]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0320 23:48:13.944]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0320 23:48:13.944]     check(*cmd)
W0320 23:48:13.944]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0320 23:48:13.946]     subprocess.check_call(cmd)
W0320 23:48:13.947]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0320 23:48:13.969]     raise CalledProcessError(retcode, cmd)
W0320 23:48:13.969] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.14-v20190318-2ac98e338', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
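The CalledProcessError above is the verify wrapper surfacing the non-zero exit of its docker invocation, so the failure is reproducible outside the harness. A trimmed sketch of the same docker run (volume paths belong to the CI workspace and would need adapting locally; most -e flags are omitted):
docker run --rm=true --privileged=true \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes \
  -e KUBE_VERIFY_GIT_BRANCH=master \
  gcr.io/k8s-testimages/kubekins-test:1.14-v20190318-2ac98e338 \
  bash -c 'cd kubernetes && ./hack/jenkins/test-dockerized.sh'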
E0320 23:48:13.977] Command failed
I0320 23:48:13.978] process 669 exited with code 1 after 14.2m
E0320 23:48:13.978] FAIL: pull-kubernetes-integration
I0320 23:48:13.978] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0320 23:48:14.660] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0320 23:48:14.738] process 65903 exited with code 0 after 0.0m
I0320 23:48:14.738] Call:  gcloud config get-value account
I0320 23:48:15.255] process 65915 exited with code 0 after 0.0m
I0320 23:48:15.256] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0320 23:48:15.256] Upload result and artifacts...
I0320 23:48:15.256] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/75528/pull-kubernetes-integration/49170
I0320 23:48:15.257] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/75528/pull-kubernetes-integration/49170/artifacts
W0320 23:48:16.968] CommandException: One or more URLs matched no objects.
E0320 23:48:17.163] Command failed
I0320 23:48:17.163] process 65927 exited with code 1 after 0.0m
W0320 23:48:17.163] Remote dir gs://kubernetes-jenkins/pr-logs/pull/75528/pull-kubernetes-integration/49170/artifacts not exist yet
I0320 23:48:17.163] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/75528/pull-kubernetes-integration/49170/artifacts
I0320 23:48:20.212] process 66069 exited with code 0 after 0.1m
W0320 23:48:20.213] metadata path /workspace/_artifacts/metadata.json does not exist
W0320 23:48:20.213] metadata not found or invalid, init with empty metadata
... skipping 22 lines ...