Result: FAILURE
Tests: 0 failed / 86 succeeded
Started: 2019-03-19 21:59
Elapsed: 13m12s
Revision:
Builder: gke-prow-containerd-pool-99179761-cnsm
pod: 3bcd3abf-4a92-11e9-ab9f-0a580a6c0a8e
resultstore: https://source.cloud.google.com/results/invocations/ecfae2e6-7c65-400f-b9f5-b98487a759d5/targets/test
infra-commit: 0cb02061a
repo: k8s.io/kubernetes
repo-commit: ac16ac7cbe11585a53f70057d05a6212952b5051
repos: {u'k8s.io/kubernetes': u'master'}

No Test Failures!



Error lines from build-log.txt

... skipping 300 lines ...
W0319 22:09:37.462] I0319 22:09:37.461437   46429 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0319 22:09:37.462] I0319 22:09:37.461539   46429 server.go:559] external host was not specified, using 172.17.0.2
W0319 22:09:37.463] W0319 22:09:37.461557   46429 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0319 22:09:37.463] I0319 22:09:37.461793   46429 server.go:146] Version: v1.15.0-alpha.0.1279+ac16ac7cbe1158
W0319 22:09:37.976] I0319 22:09:37.975180   46429 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0319 22:09:37.976] I0319 22:09:37.975228   46429 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0319 22:09:37.976] E0319 22:09:37.975895   46429 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:37.977] E0319 22:09:37.975955   46429 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:37.977] E0319 22:09:37.976005   46429 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:37.977] E0319 22:09:37.976055   46429 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:37.978] E0319 22:09:37.976097   46429 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:37.978] E0319 22:09:37.976159   46429 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:37.978] I0319 22:09:37.976185   46429 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0319 22:09:37.978] I0319 22:09:37.976192   46429 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0319 22:09:37.979] I0319 22:09:37.978640   46429 client.go:352] parsed scheme: ""
W0319 22:09:37.979] I0319 22:09:37.978682   46429 client.go:352] scheme "" not registered, fallback to default scheme
W0319 22:09:37.979] I0319 22:09:37.978805   46429 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0319 22:09:37.979] I0319 22:09:37.979085   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
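The repeated "duplicate metrics collector registration attempted" errors above come from Prometheus' Go client: the workqueue metrics for admission_quota_controller are registered twice against the same registry, and client_golang rejects the second collector. A minimal sketch of that failure mode (the metric name below is illustrative, not the exact one the controller registers):

    package main

    import (
    	"fmt"

    	"github.com/prometheus/client_golang/prometheus"
    )

    func main() {
    	// Two gauges describing the same fully-qualified metric name.
    	opts := prometheus.GaugeOpts{Name: "admission_quota_controller_depth", Help: "queue depth"}

    	// The first registration succeeds.
    	if err := prometheus.Register(prometheus.NewGauge(opts)); err != nil {
    		fmt.Println("first registration:", err)
    	}
    	// The default registry refuses a second collector for the same metric;
    	// the error string is the one appearing in the E-lines above.
    	if err := prometheus.Register(prometheus.NewGauge(opts)); err != nil {
    		fmt.Println("second registration:", err)
    	}
    }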
... skipping 361 lines ...
W0319 22:09:38.624] W0319 22:09:38.623362   46429 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0319 22:09:38.974] I0319 22:09:38.973924   46429 client.go:352] parsed scheme: ""
W0319 22:09:38.975] I0319 22:09:38.973982   46429 client.go:352] scheme "" not registered, fallback to default scheme
W0319 22:09:38.975] I0319 22:09:38.974041   46429 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0319 22:09:38.975] I0319 22:09:38.974351   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:09:38.975] I0319 22:09:38.975002   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:09:39.542] E0319 22:09:39.541432   46429 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:39.542] E0319 22:09:39.541544   46429 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:39.543] E0319 22:09:39.541597   46429 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:39.543] E0319 22:09:39.541650   46429 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:39.543] E0319 22:09:39.541667   46429 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:39.544] E0319 22:09:39.541691   46429 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0319 22:09:39.544] I0319 22:09:39.541725   46429 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0319 22:09:39.544] I0319 22:09:39.541746   46429 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0319 22:09:39.544] I0319 22:09:39.543225   46429 client.go:352] parsed scheme: ""
W0319 22:09:39.544] I0319 22:09:39.543283   46429 client.go:352] scheme "" not registered, fallback to default scheme
W0319 22:09:39.545] I0319 22:09:39.543329   46429 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0319 22:09:39.545] I0319 22:09:39.543379   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 274 lines ...
W0319 22:10:24.465] I0319 22:10:24.461418   49333 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
W0319 22:10:24.465] I0319 22:10:24.461484   49333 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
W0319 22:10:24.465] I0319 22:10:24.461535   49333 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0319 22:10:24.466] I0319 22:10:24.461686   49333 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
W0319 22:10:24.466] I0319 22:10:24.461732   49333 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
W0319 22:10:24.466] I0319 22:10:24.461770   49333 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W0319 22:10:24.466] E0319 22:10:24.461815   49333 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0319 22:10:24.466] I0319 22:10:24.461864   49333 controllermanager.go:497] Started "resourcequota"
W0319 22:10:24.467] I0319 22:10:24.462004   49333 resource_quota_controller.go:276] Starting resource quota controller
W0319 22:10:24.467] I0319 22:10:24.462071   49333 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W0319 22:10:24.467] I0319 22:10:24.462139   49333 resource_quota_monitor.go:301] QuotaMonitor running
W0319 22:10:24.467] I0319 22:10:24.462369   49333 controllermanager.go:497] Started "csrcleaner"
W0319 22:10:24.468] I0319 22:10:24.462395   49333 cleaner.go:81] Starting CSR cleaner controller
... skipping 18 lines ...
W0319 22:10:24.960] I0319 22:10:24.885276   49333 namespace_controller.go:186] Starting namespace controller
W0319 22:10:24.961] I0319 22:10:24.885311   49333 controller_utils.go:1027] Waiting for caches to sync for namespace controller
W0319 22:10:24.961] I0319 22:10:24.885592   49333 controllermanager.go:497] Started "cronjob"
W0319 22:10:24.961] W0319 22:10:24.885616   49333 controllermanager.go:476] "tokencleaner" is disabled
W0319 22:10:24.961] I0319 22:10:24.885680   49333 cronjob_controller.go:94] Starting CronJob Manager
W0319 22:10:24.961] I0319 22:10:24.887072   49333 node_lifecycle_controller.go:77] Sending events to api server
W0319 22:10:24.961] E0319 22:10:24.887180   49333 core.go:161] failed to start cloud node lifecycle controller: no cloud provider provided
W0319 22:10:24.961] W0319 22:10:24.887214   49333 controllermanager.go:489] Skipping "cloud-node-lifecycle"
W0319 22:10:24.962] I0319 22:10:24.887831   49333 controllermanager.go:497] Started "clusterrole-aggregation"
W0319 22:10:24.962] I0319 22:10:24.887994   49333 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0319 22:10:24.962] I0319 22:10:24.888016   49333 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller
W0319 22:10:24.962] I0319 22:10:24.888521   49333 controllermanager.go:497] Started "serviceaccount"
W0319 22:10:24.962] I0319 22:10:24.888955   49333 serviceaccounts_controller.go:115] Starting service account controller
... skipping 10 lines ...
W0319 22:10:24.963] I0319 22:10:24.892135   49333 node_lifecycle_controller.go:292] Sending events to api server.
W0319 22:10:24.963] I0319 22:10:24.892421   49333 node_lifecycle_controller.go:325] Controller is using taint based evictions.
W0319 22:10:24.963] I0319 22:10:24.892472   49333 taint_manager.go:175] Sending events to api server.
W0319 22:10:24.963] I0319 22:10:24.892703   49333 node_lifecycle_controller.go:390] Controller will reconcile labels.
W0319 22:10:24.964] I0319 22:10:24.892725   49333 node_lifecycle_controller.go:403] Controller will taint node by condition.
W0319 22:10:24.964] I0319 22:10:24.892751   49333 controllermanager.go:497] Started "nodelifecycle"
W0319 22:10:24.964] E0319 22:10:24.893232   49333 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0319 22:10:24.964] W0319 22:10:24.893274   49333 controllermanager.go:489] Skipping "service"
W0319 22:10:24.964] W0319 22:10:24.893282   49333 controllermanager.go:489] Skipping "ttl-after-finished"
W0319 22:10:24.964] I0319 22:10:24.893713   49333 controllermanager.go:497] Started "statefulset"
W0319 22:10:24.964] I0319 22:10:24.894149   49333 controllermanager.go:497] Started "persistentvolume-expander"
W0319 22:10:24.964] I0319 22:10:24.895539   49333 controllermanager.go:497] Started "replicationcontroller"
W0319 22:10:24.964] I0319 22:10:24.895942   49333 controllermanager.go:497] Started "job"
... skipping 33 lines ...
W0319 22:10:24.989] I0319 22:10:24.989170   49333 controller_utils.go:1034] Caches are synced for service account controller
W0319 22:10:24.991] I0319 22:10:24.991177   49333 controller_utils.go:1034] Caches are synced for HPA controller
W0319 22:10:24.993] I0319 22:10:24.992756   46429 controller.go:606] quota admission added evaluator for: serviceaccounts
W0319 22:10:24.999] I0319 22:10:24.999277   49333 controller_utils.go:1034] Caches are synced for expand controller
W0319 22:10:25.017] I0319 22:10:25.017293   49333 controller_utils.go:1034] Caches are synced for certificate controller
W0319 22:10:25.018] I0319 22:10:25.017474   49333 controller_utils.go:1034] Caches are synced for ReplicationController controller
W0319 22:10:25.019] E0319 22:10:25.018025   49333 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0319 22:10:25.019] E0319 22:10:25.018074   49333 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0319 22:10:25.019] I0319 22:10:25.018807   49333 controller_utils.go:1034] Caches are synced for PVC protection controller
W0319 22:10:25.019] I0319 22:10:25.019212   49333 controller_utils.go:1034] Caches are synced for deployment controller
W0319 22:10:25.057] I0319 22:10:25.056491   49333 controller_utils.go:1034] Caches are synced for endpoint controller
W0319 22:10:25.258] I0319 22:10:25.258096   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:10:25.259] I0319 22:10:25.258361   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0319 22:10:25.317] I0319 22:10:25.316975   49333 controller_utils.go:1034] Caches are synced for job controller
W0319 22:10:25.417] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0319 22:10:25.454] W0319 22:10:25.454117   49333 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0319 22:10:25.473] I0319 22:10:25.472650   49333 controller_utils.go:1034] Caches are synced for daemon sets controller
W0319 22:10:25.492] I0319 22:10:25.492206   49333 controller_utils.go:1034] Caches are synced for TTL controller
W0319 22:10:25.516] I0319 22:10:25.516287   49333 controller_utils.go:1034] Caches are synced for attach detach controller
W0319 22:10:25.524] I0319 22:10:25.523813   49333 controller_utils.go:1034] Caches are synced for taint controller
W0319 22:10:25.525] I0319 22:10:25.524726   49333 node_lifecycle_controller.go:1159] Initializing eviction metric for zone: 
W0319 22:10:25.525] I0319 22:10:25.525204   49333 node_lifecycle_controller.go:1009] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
... skipping 26 lines ...
W0319 22:10:26.012] I0319 22:10:25.792192   49333 controller_utils.go:1034] Caches are synced for disruption controller
W0319 22:10:26.012] I0319 22:10:25.792300   49333 disruption.go:294] Sending events to api server.
W0319 22:10:26.013] I0319 22:10:25.799379   49333 controller_utils.go:1034] Caches are synced for stateful set controller
I0319 22:10:26.113] Successful: the flag '--client' shows correct client info
I0319 22:10:26.114] Successful: the flag '--client' correctly has no server version info
I0319 22:10:26.117] +++ [0319 22:10:26] Testing kubectl version: verify json output
W0319 22:10:26.217] E0319 22:10:26.159692   49333 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0319 22:10:26.259] I0319 22:10:26.258669   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:10:26.260] I0319 22:10:26.258981   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:10:26.360] Successful: --output json has correct client info
I0319 22:10:26.361] Successful: --output json has correct server info
I0319 22:10:26.361] +++ [0319 22:10:26] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
W0319 22:10:26.461] I0319 22:10:26.365037   49333 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
... skipping 58 lines ...
I0319 22:10:29.806] +++ working dir: /go/src/k8s.io/kubernetes
I0319 22:10:29.808] +++ command: run_RESTMapper_evaluation_tests
I0319 22:10:29.821] +++ [0319 22:10:29] Creating namespace namespace-1553033429-3640
I0319 22:10:29.904] namespace/namespace-1553033429-3640 created
I0319 22:10:29.992] Context "test" modified.
I0319 22:10:30.002] +++ [0319 22:10:30] Testing RESTMapper
I0319 22:10:30.124] +++ [0319 22:10:30] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0319 22:10:30.142] +++ exit code: 0
W0319 22:10:30.261] I0319 22:10:30.260748   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:10:30.262] I0319 22:10:30.260976   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:10:30.362] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0319 22:10:30.362] bindings                                                                      true         Binding
I0319 22:10:30.363] componentstatuses                 cs                                          false        ComponentStatus
... skipping 685 lines ...
I0319 22:10:52.939] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0319 22:10:53.156] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0319 22:10:53.288] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0319 22:10:53.507] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0319 22:10:53.628] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0319 22:10:53.737] pod "valid-pod" force deleted
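The core.sh assertions above compare kubectl's -o go-template output against an expected string such as "valid-pod:". kubectl evaluates that flag with Go's text/template; a minimal sketch of the same evaluation against stand-in pod data (not taken from this run):

    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	// Stand-in for the decoded JSON object kubectl gets back from the apiserver.
    	podList := map[string]interface{}{
    		"items": []interface{}{
    			map[string]interface{}{"metadata": map[string]interface{}{"name": "valid-pod"}},
    		},
    	}

    	// The same template string the assertions pass via -o go-template.
    	tmpl := template.Must(template.New("names").Parse(
    		"{{range .items}}{{.metadata.name}}:{{end}}"))

    	// Prints "valid-pod:", which is what the test compares against.
    	if err := tmpl.Execute(os.Stdout, podList); err != nil {
    		panic(err)
    	}
    }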
W0319 22:10:53.837] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0319 22:10:53.838] I0319 22:10:53.281026   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:10:53.838] I0319 22:10:53.281318   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0319 22:10:53.839] error: setting 'all' parameter but found a non empty selector. 
W0319 22:10:53.839] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0319 22:10:53.941] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0319 22:10:53.983] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0319 22:10:54.079] (Bnamespace/test-kubectl-describe-pod created
I0319 22:10:54.212] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0319 22:10:54.336] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
I0319 22:10:55.607] poddisruptionbudget.policy/test-pdb-3 created
I0319 22:10:55.737] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0319 22:10:55.828] poddisruptionbudget.policy/test-pdb-4 created
I0319 22:10:55.947] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0319 22:10:56.150] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:10:56.400] pod/env-test-pod created
W0319 22:10:56.501] error: min-available and max-unavailable cannot be both specified
W0319 22:10:56.501] I0319 22:10:56.284542   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:10:56.502] I0319 22:10:56.284769   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:10:56.679] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0319 22:10:56.679] Name:               env-test-pod
I0319 22:10:56.679] Namespace:          test-kubectl-describe-pod
I0319 22:10:56.679] Priority:           0
... skipping 177 lines ...
W0319 22:11:11.295] I0319 22:11:11.294745   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:11.296] I0319 22:11:11.294961   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:11:11.396] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:11:11.530] pod/valid-pod created
I0319 22:11:11.662] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0319 22:11:11.859] Successful
I0319 22:11:11.859] message:Error from server: cannot restore map from string
I0319 22:11:11.859] has:cannot restore map from string
W0319 22:11:11.960] E0319 22:11:11.844521   46429 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0319 22:11:12.060] Successful
I0319 22:11:12.061] message:pod/valid-pod patched (no change)
I0319 22:11:12.061] has:patched (no change)
I0319 22:11:12.072] pod/valid-pod patched
I0319 22:11:12.198] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0319 22:11:12.317] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 8 lines ...
I0319 22:11:13.082] pod/valid-pod patched
I0319 22:11:13.210] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0319 22:11:13.418] pod/valid-pod patched
W0319 22:11:13.519] I0319 22:11:13.295784   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:13.519] I0319 22:11:13.296427   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:11:13.620] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0319 22:11:13.775] +++ [0319 22:11:13] "kubectl patch with resourceVersion 508" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0319 22:11:14.090] pod "valid-pod" deleted
I0319 22:11:14.107] pod/valid-pod replaced
I0319 22:11:14.247] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0319 22:11:14.485] Successful
I0319 22:11:14.486] message:error: --grace-period must have --force specified
I0319 22:11:14.486] has:\-\-grace-period must have \-\-force specified
W0319 22:11:14.586] I0319 22:11:14.296789   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:14.587] I0319 22:11:14.297011   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:11:14.722] Successful
I0319 22:11:14.723] message:error: --timeout must have --force specified
I0319 22:11:14.723] has:\-\-timeout must have \-\-force specified
W0319 22:11:14.974] W0319 22:11:14.973638   49333 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0319 22:11:15.075] node/node-v1-test created
I0319 22:11:15.256] node/node-v1-test replaced
W0319 22:11:15.357] I0319 22:11:15.298414   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:15.357] I0319 22:11:15.298656   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:11:15.458] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0319 22:11:15.522] node "node-v1-test" deleted
... skipping 23 lines ...
I0319 22:11:18.033]     name: kubernetes-pause
I0319 22:11:18.033] has:localonlyvalue
I0319 22:11:18.114] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0319 22:11:18.370] core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0319 22:11:18.499] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0319 22:11:18.619] pod/valid-pod labeled
W0319 22:11:18.719] error: 'name' already has a value (valid-pod), and --overwrite is false
W0319 22:11:18.720] I0319 22:11:18.300325   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:18.720] I0319 22:11:18.300826   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:11:18.821] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0319 22:11:18.905] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0319 22:11:19.020] pod "valid-pod" force deleted
W0319 22:11:19.121] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 104 lines ...
I0319 22:11:29.192] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0319 22:11:29.197] +++ working dir: /go/src/k8s.io/kubernetes
I0319 22:11:29.199] +++ command: run_kubectl_create_error_tests
I0319 22:11:29.220] +++ [0319 22:11:29] Creating namespace namespace-1553033489-6280
I0319 22:11:29.311] namespace/namespace-1553033489-6280 created
I0319 22:11:29.405] Context "test" modified.
I0319 22:11:29.416] +++ [0319 22:11:29] Testing kubectl create with error
W0319 22:11:29.517] I0319 22:11:29.313874   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:29.517] I0319 22:11:29.314051   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0319 22:11:29.517] Error: must specify one of -f and -k
W0319 22:11:29.517] 
W0319 22:11:29.517] Create a resource from a file or from stdin.
W0319 22:11:29.517] 
W0319 22:11:29.518]  JSON and YAML formats are accepted.
W0319 22:11:29.518] 
W0319 22:11:29.518] Examples:
... skipping 41 lines ...
W0319 22:11:29.522] 
W0319 22:11:29.522] Usage:
W0319 22:11:29.522]   kubectl create -f FILENAME [options]
W0319 22:11:29.522] 
W0319 22:11:29.522] Use "kubectl <command> --help" for more information about a given command.
W0319 22:11:29.522] Use "kubectl options" for a list of global command-line options (applies to all commands).
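kubectl is built on spf13/cobra, and the "Error: must specify one of -f and -k" above (followed by the command's usage text) is ordinary client-side flag validation that fails before any request reaches the apiserver. A rough sketch of that pattern, simplified and not kubectl's actual code:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"

    	"github.com/spf13/cobra"
    )

    func main() {
    	var filenames []string
    	var kustomizeDir string

    	cmd := &cobra.Command{
    		Use: "create -f FILENAME",
    		RunE: func(cmd *cobra.Command, args []string) error {
    			// Mirrors the validation that produced the error above:
    			// creating resources requires a file (-f) or a kustomization (-k).
    			if len(filenames) == 0 && kustomizeDir == "" {
    				return errors.New("must specify one of -f and -k")
    			}
    			fmt.Println("would create resources from:", filenames, kustomizeDir)
    			return nil
    		},
    	}
    	cmd.Flags().StringSliceVarP(&filenames, "filename", "f", nil, "files to use to create the resource")
    	cmd.Flags().StringVarP(&kustomizeDir, "kustomize", "k", "", "kustomization directory")

    	// On a RunE error, cobra prints "Error: ..." plus the usage text, as above.
    	if err := cmd.Execute(); err != nil {
    		os.Exit(1)
    	}
    }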
I0319 22:11:29.741] +++ [0319 22:11:29] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0319 22:11:29.842] kubectl convert is DEPRECATED and will be removed in a future version.
W0319 22:11:29.842] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0319 22:11:29.979] +++ exit code: 0
I0319 22:11:30.042] Recording: run_kubectl_apply_tests
I0319 22:11:30.042] Running command: run_kubectl_apply_tests
I0319 22:11:30.078] 
... skipping 26 lines ...
W0319 22:11:32.856] I0319 22:11:32.854690   46429 client.go:352] scheme "" not registered, fallback to default scheme
W0319 22:11:32.856] I0319 22:11:32.854727   46429 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0319 22:11:32.856] I0319 22:11:32.854774   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:11:32.856] I0319 22:11:32.855310   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:11:32.858] I0319 22:11:32.857812   46429 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0319 22:11:32.959] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0319 22:11:33.060] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0319 22:11:33.160] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0319 22:11:33.161] +++ exit code: 0
I0319 22:11:33.180] Recording: run_kubectl_run_tests
I0319 22:11:33.180] Running command: run_kubectl_run_tests
I0319 22:11:33.220] 
I0319 22:11:33.224] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 104 lines ...
I0319 22:11:36.512] Context "test" modified.
I0319 22:11:36.520] +++ [0319 22:11:36] Testing kubectl create filter
I0319 22:11:36.641] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:11:36.891] pod/selector-test-pod created
I0319 22:11:37.051] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0319 22:11:37.167] Successful
I0319 22:11:37.168] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0319 22:11:37.168] has:pods "selector-test-pod-dont-apply" not found
I0319 22:11:37.273] pod "selector-test-pod" deleted
I0319 22:11:37.310] +++ exit code: 0
I0319 22:11:37.366] Recording: run_kubectl_apply_deployments_tests
I0319 22:11:37.367] Running command: run_kubectl_apply_deployments_tests
I0319 22:11:37.398] 
... skipping 33 lines ...
I0319 22:11:40.004] replicaset.extensions "my-depl-656cffcbcc" deleted
I0319 22:11:40.023] pod "my-depl-64775887d7-q6lhh" deleted
I0319 22:11:40.029] pod "my-depl-656cffcbcc-ftn7h" deleted
W0319 22:11:40.130] I0319 22:11:39.320052   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:40.130] I0319 22:11:39.320363   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0319 22:11:40.131] I0319 22:11:39.990003   46429 controller.go:606] quota admission added evaluator for: replicasets.extensions
W0319 22:11:40.131] E0319 22:11:40.028363   49333 replica_set.go:450] Sync "namespace-1553033497-13676/my-depl-656cffcbcc" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-656cffcbcc": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1553033497-13676/my-depl-656cffcbcc, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f766f884-4a93-11e9-a65f-0242ac110002, UID in object meta: 
I0319 22:11:40.232] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:11:40.299] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:11:40.417] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:11:40.548] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:11:40.790] deployment.extensions/nginx created
W0319 22:11:40.891] I0319 22:11:40.320714   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:40.891] I0319 22:11:40.321001   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0319 22:11:40.892] I0319 22:11:40.796198   49333 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1553033497-13676", Name:"nginx", UID:"f8e3ec73-4a93-11e9-a65f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-776cc67f78 to 3
W0319 22:11:40.892] I0319 22:11:40.804414   49333 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553033497-13676", Name:"nginx-776cc67f78", UID:"f8e4e69d-4a93-11e9-a65f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-dng68
W0319 22:11:40.893] I0319 22:11:40.809640   49333 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553033497-13676", Name:"nginx-776cc67f78", UID:"f8e4e69d-4a93-11e9-a65f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-7cmc7
W0319 22:11:40.893] I0319 22:11:40.812408   49333 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553033497-13676", Name:"nginx-776cc67f78", UID:"f8e4e69d-4a93-11e9-a65f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-x7lbn
I0319 22:11:40.993] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0319 22:11:45.314] Successful
I0319 22:11:45.315] message:Error from server (Conflict): error when applying patch:
I0319 22:11:45.315] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1553033497-13676\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0319 22:11:45.315] to:
I0319 22:11:45.316] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0319 22:11:45.316] Name: "nginx", Namespace: "namespace-1553033497-13676"
I0319 22:11:45.318] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1553033497-13676\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-03-19T22:11:40Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-03-19T22:11:40Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-03-19T22:11:40Z"]] "name":"nginx" "namespace":"namespace-1553033497-13676" "resourceVersion":"623" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1553033497-13676/deployments/nginx" "uid":"f8e3ec73-4a93-11e9-a65f-0242ac110002"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-03-19T22:11:40Z" "lastUpdateTime":"2019-03-19T22:11:40Z" "message":"Deployment does not have minimum 
availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0319 22:11:45.318] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0319 22:11:45.318] has:Error from server (Conflict)
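The Conflict above is Kubernetes' optimistic concurrency check at work: the apply patch carried resourceVersion "99" while the live deployment had already moved on (the dump shows "623"), so the apiserver rejects the stale write instead of silently clobbering the newer object. A toy sketch of the precondition, not the apiserver's actual implementation:

    package main

    import "fmt"

    // A toy version of the apiserver's resourceVersion precondition:
    // a write naming a stale version is rejected with a conflict.
    type object struct {
    	name            string
    	resourceVersion int
    }

    func update(stored *object, incomingVersion int) error {
    	if incomingVersion != stored.resourceVersion {
    		return fmt.Errorf("Operation cannot be fulfilled on %q: the object has been modified; please apply your changes to the latest version and try again", stored.name)
    	}
    	stored.resourceVersion++
    	return nil
    }

    func main() {
    	dep := &object{name: "nginx", resourceVersion: 623}
    	// The patch in the log carried resourceVersion "99".
    	if err := update(dep, 99); err != nil {
    		fmt.Println("conflict:", err)
    	}
    }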
W0319 22:11:45.419] I0319 22:11:41.321395   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:45.419] I0319 22:11:41.321670   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0319 22:11:45.420] I0319 22:11:42.322050   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:45.420] I0319 22:11:42.322317   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0319 22:11:45.420] I0319 22:11:43.121399   49333 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1553033485-27540
W0319 22:11:45.420] I0319 22:11:43.322809   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
... skipping 207 lines ...
I0319 22:11:59.281] Context "test" modified.
I0319 22:11:59.293] +++ [0319 22:11:59] Testing kubectl get
W0319 22:11:59.394] I0319 22:11:59.334165   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:11:59.394] I0319 22:11:59.334428   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:11:59.495] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:11:59.514] Successful
I0319 22:11:59.515] message:Error from server (NotFound): pods "abc" not found
I0319 22:11:59.515] has:pods "abc" not found
I0319 22:11:59.635] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:11:59.746] Successful
I0319 22:11:59.746] message:Error from server (NotFound): pods "abc" not found
I0319 22:11:59.746] has:pods "abc" not found
I0319 22:11:59.858] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:11:59.967] Successful
I0319 22:11:59.968] message:{
I0319 22:11:59.968]     "apiVersion": "v1",
I0319 22:11:59.968]     "items": [],
... skipping 25 lines ...
I0319 22:12:00.500] has not:No resources found
I0319 22:12:00.516] Successful
I0319 22:12:00.517] message:NAME
I0319 22:12:00.517] has not:No resources found
I0319 22:12:00.634] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:12:00.758] Successful
I0319 22:12:00.759] message:error: the server doesn't have a resource type "foobar"
I0319 22:12:00.759] has not:No resources found
I0319 22:12:00.864] Successful
I0319 22:12:00.864] message:No resources found.
I0319 22:12:00.864] has:No resources found
I0319 22:12:00.968] Successful
I0319 22:12:00.969] message:
I0319 22:12:00.969] has not:No resources found
I0319 22:12:01.075] Successful
I0319 22:12:01.075] message:No resources found.
I0319 22:12:01.075] has:No resources found
I0319 22:12:01.199] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:12:01.311] Successful
I0319 22:12:01.312] message:Error from server (NotFound): pods "abc" not found
I0319 22:12:01.312] has:pods "abc" not found
I0319 22:12:01.315] FAIL!
I0319 22:12:01.315] message:Error from server (NotFound): pods "abc" not found
I0319 22:12:01.315] has not:List
I0319 22:12:01.316] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
W0319 22:12:01.416] I0319 22:12:01.335531   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:01.416] I0319 22:12:01.335778   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:01.517] Successful
I0319 22:12:01.517] message:I0319 22:12:01.398441   60006 loader.go:359] Config loaded from file /tmp/tmp.j5QZcjmnH3/.kube/config
... skipping 717 lines ...
I0319 22:12:05.484] }
I0319 22:12:05.507] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0319 22:12:05.810] <no value>Successful
I0319 22:12:05.810] message:valid-pod:
I0319 22:12:05.810] has:valid-pod:
I0319 22:12:05.915] Successful
I0319 22:12:05.916] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0319 22:12:05.916] 	template was:
I0319 22:12:05.916] 		{.missing}
I0319 22:12:05.916] 	object given to jsonpath engine was:
I0319 22:12:05.918] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-19T22:12:05Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-19T22:12:05Z"}}, "name":"valid-pod", "namespace":"namespace-1553033524-21884", "resourceVersion":"722", "selfLink":"/api/v1/namespaces/namespace-1553033524-21884/pods/valid-pod", "uid":"077c2a59-4a94-11e9-a65f-0242ac110002"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0319 22:12:05.918] has:missing is not found
W0319 22:12:06.019] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0319 22:12:06.119] Successful
I0319 22:12:06.120] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0319 22:12:06.120] 	template was:
I0319 22:12:06.120] 		{{.missing}}
I0319 22:12:06.120] 	raw data was:
I0319 22:12:06.121] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-19T22:12:05Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-19T22:12:05Z"}],"name":"valid-pod","namespace":"namespace-1553033524-21884","resourceVersion":"722","selfLink":"/api/v1/namespaces/namespace-1553033524-21884/pods/valid-pod","uid":"077c2a59-4a94-11e9-a65f-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0319 22:12:06.121] 	object given to template engine was:
I0319 22:12:06.123] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-19T22:12:05Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-19T22:12:05Z]] name:valid-pod namespace:namespace-1553033524-21884 resourceVersion:722 selfLink:/api/v1/namespaces/namespace-1553033524-21884/pods/valid-pod uid:077c2a59-4a94-11e9-a65f-0242ac110002] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
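The two {{.missing}} template errors above reproduce with Go's standard library: kubectl renders go-templates with the equivalent of text/template's missingkey=error option, so a key absent from the object is a hard error (the "map has no entry for key" message above) rather than "<no value>". A minimal stdlib-only reproduction:

    package main

    import (
    	"fmt"
    	"os"
    	"text/template"
    )

    func main() {
    	tmpl := template.Must(template.New("output").
    		Option("missingkey=error").
    		Parse("{{.missing}}"))

    	// Executing against a map without the key fails the same way as above:
    	// template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
    	if err := tmpl.Execute(os.Stdout, map[string]interface{}{"kind": "Pod"}); err != nil {
    		fmt.Println(err)
    	}
    }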
... skipping 167 lines ...
I0319 22:12:09.454]   terminationGracePeriodSeconds: 30
I0319 22:12:09.454] status:
I0319 22:12:09.455]   phase: Pending
I0319 22:12:09.455]   qosClass: Guaranteed
I0319 22:12:09.455] has:name: valid-pod
I0319 22:12:09.468] Successful
I0319 22:12:09.469] message:Error from server (NotFound): pods "invalid-pod" not found
I0319 22:12:09.469] has:"invalid-pod" not found
I0319 22:12:09.564] pod "valid-pod" deleted
I0319 22:12:09.687] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:12:09.935] pod/redis-master created
I0319 22:12:09.941] pod/valid-pod created
I0319 22:12:10.078] Successful
... skipping 293 lines ...
I0319 22:12:17.061] Running command: run_create_secret_tests
I0319 22:12:17.096] 
I0319 22:12:17.099] +++ Running case: test-cmd.run_create_secret_tests 
I0319 22:12:17.103] +++ working dir: /go/src/k8s.io/kubernetes
I0319 22:12:17.107] +++ command: run_create_secret_tests
I0319 22:12:17.224] Successful
I0319 22:12:17.225] message:Error from server (NotFound): secrets "mysecret" not found
I0319 22:12:17.225] has:secrets "mysecret" not found
W0319 22:12:17.346] I0319 22:12:17.346038   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:17.347] I0319 22:12:17.346329   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:17.448] Successful
I0319 22:12:17.448] message:Error from server (NotFound): secrets "mysecret" not found
I0319 22:12:17.449] has:secrets "mysecret" not found
I0319 22:12:17.449] Successful
I0319 22:12:17.449] message:user-specified
I0319 22:12:17.449] has:user-specified
I0319 22:12:17.519] Successful
I0319 22:12:17.611] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"0ed67cb3-4a94-11e9-a65f-0242ac110002","resourceVersion":"830","creationTimestamp":"2019-03-19T22:12:17Z"}}
... skipping 180 lines ...
I0319 22:12:22.452] has:Timeout exceeded while reading body
I0319 22:12:22.541] Successful
I0319 22:12:22.541] message:NAME        READY   STATUS    RESTARTS   AGE
I0319 22:12:22.541] valid-pod   0/1     Pending   0          2s
I0319 22:12:22.541] has:valid-pod
I0319 22:12:22.630] Successful
I0319 22:12:22.630] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0319 22:12:22.630] has:Invalid timeout value
I0319 22:12:22.729] pod "valid-pod" deleted
I0319 22:12:22.764] +++ exit code: 0
I0319 22:12:22.831] Recording: run_crd_tests
I0319 22:12:22.831] Running command: run_crd_tests
I0319 22:12:22.868] 
... skipping 240 lines ...
I0319 22:12:27.992] someField: field1
I0319 22:12:28.497] field1field1field1field1Successful
I0319 22:12:28.498] message:foo.company.com/test
I0319 22:12:28.498] has:foo.company.com/test
I0319 22:12:28.503] +++ [0319 22:12:28] Testing CustomResource patching
I0319 22:12:28.600] foo.company.com/test patched
W0319 22:12:28.701] E0319 22:12:27.068461   49333 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"]
W0319 22:12:28.702] I0319 22:12:27.353084   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:28.702] I0319 22:12:27.353410   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0319 22:12:28.702] I0319 22:12:27.678833   49333 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W0319 22:12:28.702] I0319 22:12:27.679973   46429 client.go:352] parsed scheme: ""
W0319 22:12:28.702] I0319 22:12:27.680012   46429 client.go:352] scheme "" not registered, fallback to default scheme
W0319 22:12:28.702] I0319 22:12:27.680055   46429 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
... skipping 4 lines ...
W0319 22:12:28.703] I0319 22:12:28.354433   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:28.804] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0319 22:12:28.843] foo.company.com/test patched
I0319 22:12:28.971] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0319 22:12:29.078] foo.company.com/test patched
I0319 22:12:29.205] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0319 22:12:29.408] +++ [0319 22:12:29] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0319 22:12:29.485] {
I0319 22:12:29.485]     "apiVersion": "company.com/v1",
I0319 22:12:29.485]     "kind": "Foo",
I0319 22:12:29.485]     "metadata": {
I0319 22:12:29.485]         "annotations": {
I0319 22:12:29.486]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 352 lines ...
W0319 22:12:44.366] I0319 22:12:44.365414   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:44.366] I0319 22:12:44.365667   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:45.334] crd.sh:459: Successful get bars {{len .items}}: 0
I0319 22:12:45.535] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0319 22:12:45.636] I0319 22:12:45.365965   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:45.636] I0319 22:12:45.366208   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0319 22:12:45.636] Error from server (NotFound): namespaces "non-native-resources" not found
I0319 22:12:45.737] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0319 22:12:45.787] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0319 22:12:45.911] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0319 22:12:45.959] +++ exit code: 0
I0319 22:12:46.017] Recording: run_cmd_with_img_tests
I0319 22:12:46.018] Running command: run_cmd_with_img_tests
... skipping 12 lines ...
W0319 22:12:46.381] I0319 22:12:46.381222   49333 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553033566-27000", Name:"test1-848d5d4b47", UID:"1ffb997d-4a94-11e9-a65f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-848d5d4b47-qbvp8
I0319 22:12:46.482] Successful
I0319 22:12:46.482] message:deployment.apps/test1 created
I0319 22:12:46.482] has:deployment.apps/test1 created
I0319 22:12:46.489] deployment.extensions "test1" deleted
I0319 22:12:46.585] Successful
I0319 22:12:46.586] message:error: Invalid image name "InvalidImageName": invalid reference format
I0319 22:12:46.586] has:error: Invalid image name "InvalidImageName": invalid reference format
I0319 22:12:46.611] +++ exit code: 0
I0319 22:12:46.691] +++ [0319 22:12:46] Testing recursive resources
I0319 22:12:46.700] +++ [0319 22:12:46] Creating namespace namespace-1553033566-29456
I0319 22:12:46.793] namespace/namespace-1553033566-29456 created
I0319 22:12:46.884] Context "test" modified.
I0319 22:12:47.006] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:12:47.386] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:47.390] Successful
I0319 22:12:47.390] message:pod/busybox0 created
I0319 22:12:47.390] pod/busybox1 created
I0319 22:12:47.390] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0319 22:12:47.390] has:error validating data: kind not set
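These recursive tests feed kubectl a directory tree in which one manifest (busybox-broken.yaml) is deliberately invalid. A sketch of the invocation under test, using the fixture path from the messages above:

  # the valid pods are created; the broken manifest fails client-side validation
  # because it omits "kind"; --validate=false would skip that check
  kubectl create -f hack/testdata/recursive/pod --recursive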
W0319 22:12:47.491] I0319 22:12:47.366909   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:47.491] I0319 22:12:47.367075   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:47.592] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:47.730] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0319 22:12:47.734] Successful
I0319 22:12:47.735] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0319 22:12:47.735] has:Object 'Kind' is missing
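The decode failure occurs because the broken fixture spells the required "kind" field as "ind"; without apiVersion and kind the API machinery cannot resolve the object's type. A corrected version of that manifest would look like this sketch (fed via heredoc to stay self-contained):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod          # the broken fixture misspells this key as "ind"
  metadata:
    name: busybox2
    labels:
      app: busybox2
  spec:
    restartPolicy: Always
    containers:
    - name: busybox
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["sleep", "3600"]
  EOF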
I0319 22:12:47.852] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:48.217] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0319 22:12:48.221] Successful
I0319 22:12:48.221] message:pod/busybox0 replaced
I0319 22:12:48.221] pod/busybox1 replaced
I0319 22:12:48.222] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0319 22:12:48.222] has:error validating data: kind not set
I0319 22:12:48.335] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:48.457] Successful
I0319 22:12:48.457] message:Name:               busybox0
I0319 22:12:48.457] Namespace:          namespace-1553033566-29456
I0319 22:12:48.457] Priority:           0
I0319 22:12:48.457] PriorityClassName:  <none>
... skipping 161 lines ...
W0319 22:12:48.576] I0319 22:12:48.367705   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:48.677] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:48.827] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0319 22:12:48.830] Successful
I0319 22:12:48.831] message:pod/busybox0 annotated
I0319 22:12:48.831] pod/busybox1 annotated
I0319 22:12:48.831] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0319 22:12:48.831] has:Object 'Kind' is missing
I0319 22:12:48.943] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:49.320] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0319 22:12:49.323] Successful
I0319 22:12:49.324] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0319 22:12:49.324] pod/busybox0 configured
I0319 22:12:49.324] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0319 22:12:49.324] pod/busybox1 configured
I0319 22:12:49.324] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0319 22:12:49.324] has:error validating data: kind not set
W0319 22:12:49.425] I0319 22:12:49.368078   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:49.425] I0319 22:12:49.368448   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:49.526] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:12:49.664] deployment.apps/nginx created
W0319 22:12:49.765] I0319 22:12:49.671009   49333 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1553033566-29456", Name:"nginx", UID:"21f195c0-4a94-11e9-a65f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5f7cff5b56 to 3
W0319 22:12:49.766] I0319 22:12:49.675499   49333 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1553033566-29456", Name:"nginx-5f7cff5b56", UID:"21f280aa-4a94-11e9-a65f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1008", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-5gsj6
... skipping 53 lines ...
W0319 22:12:50.369] I0319 22:12:50.369034   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:50.470] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:50.576] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:50.579] Successful
I0319 22:12:50.580] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0319 22:12:50.580] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0319 22:12:50.581] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0319 22:12:50.581] has:Object 'Kind' is missing
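As the deprecation notice above says, the replacement for kubectl convert is to round-trip through the cluster: apply the object, then read it back at the version you want. A sketch against the CRD used earlier in this run (the fully-qualified resource.version.group form pins the version; foo.yaml is a hypothetical manifest):

  kubectl apply -f foo.yaml
  kubectl get foos.v1.company.com test -o yaml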
I0319 22:12:50.694] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:50.804] Successful
I0319 22:12:50.804] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0319 22:12:50.804] has:busybox0:busybox1:
I0319 22:12:50.808] Successful
I0319 22:12:50.808] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0319 22:12:50.808] has:Object 'Kind' is missing
I0319 22:12:50.921] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:51.044] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0319 22:12:51.165] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0319 22:12:51.168] Successful
I0319 22:12:51.169] message:pod/busybox0 labeled
I0319 22:12:51.169] pod/busybox1 labeled
I0319 22:12:51.169] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0319 22:12:51.169] has:Object 'Kind' is missing
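Labels are applied over the same recursive fixture; the valid pods are updated while the broken manifest again fails to decode. Sketch:

  kubectl label -f hack/testdata/recursive/pod --recursive mylabel=myvalue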
I0319 22:12:51.280] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:51.395] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0319 22:12:51.495] I0319 22:12:51.369430   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:51.496] I0319 22:12:51.369664   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:51.596] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0319 22:12:51.596] Successful
I0319 22:12:51.597] message:pod/busybox0 patched
I0319 22:12:51.597] pod/busybox1 patched
I0319 22:12:51.597] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0319 22:12:51.597] has:Object 'Kind' is missing
I0319 22:12:51.640] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:51.865] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:12:51.869] Successful
I0319 22:12:51.869] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0319 22:12:51.869] pod "busybox0" force deleted
I0319 22:12:51.870] pod "busybox1" force deleted
I0319 22:12:51.870] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0319 22:12:51.870] has:Object 'Kind' is missing
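As the warning notes, force deletion returns before termination is confirmed. The tests drive it roughly like this (sketch):

  # --grace-period=0 --force deletes immediately, without waiting for the kubelet
  kubectl delete -f hack/testdata/recursive/pod --recursive --grace-period=0 --force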
I0319 22:12:51.984] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0319 22:12:52.212] replicationcontroller/busybox0 created
I0319 22:12:52.217] replicationcontroller/busybox1 created
W0319 22:12:52.318] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0319 22:12:52.319] I0319 22:12:52.217668   49333 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1553033566-29456", Name:"busybox0", UID:"23765fd3-4a94-11e9-a65f-0242ac110002", APIVersion:"v1", ResourceVersion:"1038", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-6dkrc
W0319 22:12:52.319] I0319 22:12:52.222105   49333 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1553033566-29456", Name:"busybox1", UID:"23775203-4a94-11e9-a65f-0242ac110002", APIVersion:"v1", ResourceVersion:"1040", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-qhn65
W0319 22:12:52.370] I0319 22:12:52.370003   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:52.371] I0319 22:12:52.370348   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:52.471] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:52.472] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:52.582] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0319 22:12:52.701] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0319 22:12:52.933] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0319 22:12:53.050] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0319 22:12:53.054] Successful
I0319 22:12:53.055] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0319 22:12:53.055] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0319 22:12:53.055] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0319 22:12:53.055] has:Object 'Kind' is missing
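The HPA assertions above (min 1, max 2, 80% target CPU) correspond to a recursive autoscale call over the rc fixture; a sketch, assuming kubectl autoscale accepts the same -f/--recursive pattern as the other commands in this suite:

  kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80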
I0319 22:12:53.146] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0319 22:12:53.249] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0319 22:12:53.377] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:53.496] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0319 22:12:53.622] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0319 22:12:53.856] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0319 22:12:53.979] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0319 22:12:53.984] Successful
I0319 22:12:53.984] message:service/busybox0 exposed
I0319 22:12:53.984] service/busybox1 exposed
I0319 22:12:53.984] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0319 22:12:53.985] has:Object 'Kind' is missing
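Exposing the replication controllers creates one service per RC on port 80; the <no value> in the assertions just means the port was given no name. Sketch:

  kubectl expose -f hack/testdata/recursive/rc --recursive --port=80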
W0319 22:12:54.085] I0319 22:12:53.370694   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:54.086] I0319 22:12:53.370921   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0319 22:12:54.186] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:54.230] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0319 22:12:54.358] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0319 22:12:54.645] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0319 22:12:54.772] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0319 22:12:54.775] Successful
I0319 22:12:54.776] message:replicationcontroller/busybox0 scaled
I0319 22:12:54.776] replicationcontroller/busybox1 scaled
I0319 22:12:54.776] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0319 22:12:54.776] has:Object 'Kind' is missing
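Scaling each RC from 1 to 2 replicas is a single recursive call; sketch:

  kubectl scale -f hack/testdata/recursive/rc --recursive --replicas=2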
W0319 22:12:54.877] I0319 22:12:54.371357   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0319 22:12:54.878] I0319 22:12:54.371681   46429 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0319 22:12:54.878] I0319 22:12:54.491105   49333 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1553033566-29456", Name:"busybox0", UID:"23765fd3-4a94-11e9-a65f-0242ac110002", APIVersion:"v1", ResourceVersion:"1059", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-262v5
W0319 22:12:54.878] I0319 22:12:54.509197   49333 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1553033566-29456", Name:"busybox1", UID:"23775203-4a94-11e9-a65f-0242ac110002", APIVersion:"v1", ResourceVersion:"1063", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-wn9x9
I0319 22:12:54.979] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0319 22:12:55.193] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: , got: busybox0-262v5:busybox0-6dkrc:busybox1-qhn65:busybox1-wn9x9:
I0319 22:12:55.195] 
I0319 22:12:55.202] generic-resources.sh:381: FAIL!
I0319 22:12:55.202] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I0319 22:12:55.202]   Expected: 
I0319 22:12:55.202]   Got:      busybox0-262v5:busybox0-6dkrc:busybox1-qhn65:busybox1-wn9x9:
I0319 22:12:55.202]
I0319 22:12:55.202] 51 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I0319 22:12:55.202]
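These assertions compare kubectl's go-template output against an expected string; the FAIL above means four controller pods still existed where the test expected none. The underlying check is roughly (sketch):

  kubectl get pods -o go-template='{{range .items}}{{.metadata.name}}:{{end}}'
  # the harness diffs this output against the expected value; here it expected ""
  # but got "busybox0-262v5:busybox0-6dkrc:busybox1-qhn65:busybox1-wn9x9:"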
... skipping 6 lines ...
W0319 22:12:55.304] I0319 22:12:55.253432   46429 crdregistration_controller.go:143] Shutting down crd-autoregister controller
W0319 22:12:55.304] I0319 22:12:55.254226   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.305] I0319 22:12:55.254279   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.305] I0319 22:12:55.253410   46429 crd_finalizer.go:254] Shutting down CRDFinalizer
W0319 22:12:55.305] I0319 22:12:55.254527   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.305] I0319 22:12:55.254554   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.305] W0319 22:12:55.254502   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.305] I0319 22:12:55.253442   46429 naming_controller.go:295] Shutting down NamingConditionController
W0319 22:12:55.305] I0319 22:12:55.253467   46429 available_controller.go:332] Shutting down AvailableConditionController
W0319 22:12:55.306] I0319 22:12:55.253467   46429 autoregister_controller.go:163] Shutting down autoregister controller
W0319 22:12:55.306] I0319 22:12:55.253485   46429 customresource_discovery_controller.go:219] Shutting down DiscoveryController
W0319 22:12:55.306] I0319 22:12:55.253502   46429 controller.go:87] Shutting down OpenAPI AggregationController
W0319 22:12:55.306] I0319 22:12:55.253566   46429 controller.go:176] Shutting down kubernetes service endpoint reconciler
... skipping 38 lines ...
W0319 22:12:55.311] I0319 22:12:55.255924   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.311] I0319 22:12:55.255950   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.311] I0319 22:12:55.255958   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.311] I0319 22:12:55.256075   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.311] I0319 22:12:55.256089   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.311] I0319 22:12:55.256109   46429 secure_serving.go:160] Stopped listening on 127.0.0.1:6443
W0319 22:12:55.312] W0319 22:12:55.256236   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.312] W0319 22:12:55.256271   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.312] W0319 22:12:55.256358   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.312] I0319 22:12:55.256469   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.312] I0319 22:12:55.256497   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.313] W0319 22:12:55.256546   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.313] W0319 22:12:55.256628   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.313] I0319 22:12:55.256700   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.313] W0319 22:12:55.256720   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.313] W0319 22:12:55.256478   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.313] W0319 22:12:55.256787   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.314] I0319 22:12:55.256863   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.314] I0319 22:12:55.256889   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.314] I0319 22:12:55.256723   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.314] W0319 22:12:55.256403   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.314] I0319 22:12:55.257145   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.314] I0319 22:12:55.257209   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.315] W0319 22:12:55.257292   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.315] I0319 22:12:55.257313   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.315] I0319 22:12:55.257400   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.315] I0319 22:12:55.257460   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.315] W0319 22:12:55.257145   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.315] W0319 22:12:55.256793   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.316] W0319 22:12:55.257494   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.316] I0319 22:12:55.257401   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.316] W0319 22:12:55.257367   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.316] W0319 22:12:55.257332   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.316] W0319 22:12:55.257611   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.317] I0319 22:12:55.257859   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.317] I0319 22:12:55.257884   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.317] I0319 22:12:55.257910   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.317] I0319 22:12:55.257917   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.317] I0319 22:12:55.257976   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.317] I0319 22:12:55.257991   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 4 lines ...
W0319 22:12:55.318] I0319 22:12:55.258049   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.318] I0319 22:12:55.258055   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.318] I0319 22:12:55.258070   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.318] I0319 22:12:55.258079   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.318] I0319 22:12:55.258105   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.318] I0319 22:12:55.258124   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.319] W0319 22:12:55.258339   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.319] I0319 22:12:55.258664   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.319] I0319 22:12:55.258686   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.319] I0319 22:12:55.258705   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.319] I0319 22:12:55.258710   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.319] I0319 22:12:55.258726   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.319] I0319 22:12:55.258735   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.320] I0319 22:12:55.258752   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.320] I0319 22:12:55.258757   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.320] W0319 22:12:55.258772   46429 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0319 22:12:55.320] I0319 22:12:55.258829   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
W0319 22:12:55.320] I0319 22:12:55.258845   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.320] I0319 22:12:55.257367   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.320] I0319 22:12:55.258999   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.321] I0319 22:12:55.258999   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.321] I0319 22:12:55.259036   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.321] I0319 22:12:55.259075   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.321] E0319 22:12:55.259088   46429 controller.go:179] rpc error: code = Unavailable desc = transport is closing
W0319 22:12:55.321] I0319 22:12:55.259095   46429 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0319 22:12:55.394] make: *** [test-cmd] Error 1
I0319 22:12:55.495] junit report dir: /workspace/artifacts
I0319 22:12:55.495] +++ [0319 22:12:55] Clean up complete
I0319 22:12:55.495] Makefile:298: recipe for target 'test-cmd' failed
W0319 22:12:58.207] Traceback (most recent call last):
W0319 22:12:58.207]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0319 22:12:58.207]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0319 22:12:58.207]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0319 22:12:58.207]     check(*cmd)
W0319 22:12:58.208]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0319 22:12:58.208]     subprocess.check_call(cmd)
W0319 22:12:58.208]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0319 22:12:58.208]     raise CalledProcessError(retcode, cmd)
W0319 22:12:58.209] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.14-v20190318-2ac98e338', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0319 22:12:58.215] Command failed
I0319 22:12:58.215] process 490 exited with code 1 after 12.1m
E0319 22:12:58.216] FAIL: ci-kubernetes-integration-master
I0319 22:12:58.216] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0319 22:12:58.746] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0319 22:12:58.809] process 66862 exited with code 0 after 0.0m
I0319 22:12:58.810] Call:  gcloud config get-value account
I0319 22:12:59.157] process 66874 exited with code 0 after 0.0m
I0319 22:12:59.158] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0319 22:12:59.158] Upload result and artifacts...
I0319 22:12:59.158] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/9600
I0319 22:12:59.158] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/9600/artifacts
W0319 22:13:00.386] CommandException: One or more URLs matched no objects.
E0319 22:13:00.548] Command failed
I0319 22:13:00.548] process 66886 exited with code 1 after 0.0m
W0319 22:13:00.549] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/9600/artifacts not exist yet
I0319 22:13:00.549] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/9600/artifacts
I0319 22:13:02.623] process 67028 exited with code 0 after 0.0m
W0319 22:13:02.623] metadata path /workspace/_artifacts/metadata.json does not exist
W0319 22:13:02.624] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...