PR xdy: Add empty file to trigger PR testing
Result: FAILURE
Tests: 1 failed / 61 succeeded
Started: 2019-03-15 18:55
Elapsed: 11m49s
Revision: e88187e1f921b0cc21de13ca924a881743b07b59
Refs: 46662

Test Failures


Gubernator Internal Fatal XML Parse Error (0.00s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Gubernator\sInternal\sFatal\sXML\sParse\sError$'
not well-formed (invalid token): line 74, column 12
    from junit_test-cmd.xml
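
The parse error above is the only recorded "test" failure: Gubernator could not read junit_test-cmd.xml at all, so the real kubectl failures are only visible in the raw build log below. "not well-formed (invalid token)" is the expat message that Python's xml.etree raises when a document contains bytes XML 1.0 forbids, and the shell traces below are littered with raw terminal escape sequences (the stray "(B" fragments are the tail of ESC ( B charset switches) that end up inside the JUnit CDATA. A minimal sketch of the failure mode and one way to sanitize the file before parsing, assuming those control bytes are the cause:

# Sketch: reproduce the expat error, then strip XML-illegal control bytes.
# Assumption: junit_test-cmd.xml contains raw ESC (0x1b) characters from
# terminal escape sequences such as ESC ( B, which XML 1.0 does not allow.
import re
import xml.etree.ElementTree as ET

bad = '<testsuite><testcase name="t"><failure>\x1b(Bboom</failure></testcase></testsuite>'
try:
    ET.fromstring(bad)
except ET.ParseError as err:
    print(err)  # not well-formed (invalid token): line 1, column 39

# Of the C0 controls, XML 1.0 permits only tab, newline, and carriage return.
ILLEGAL_XML_CHARS = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f]')
root = ET.fromstring(ILLEGAL_XML_CHARS.sub('', bad))
print(root.find('testcase/failure').text)  # (Bboom

With the control characters removed the same document parses cleanly, and the two suites that actually failed (run_kubectl_version_tests and run_pod_tests, per the foundError accumulator later in the log) would have been reported individually instead of as one parser fatal.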




Error lines from build-log.txt

... skipping 390 lines ...
I0315 19:03:07.729254   47212 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0315 19:03:07.729351   47212 server.go:559] external host was not specified, using 10.61.18.212
W0315 19:03:07.729371   47212 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
I0315 19:03:07.729647   47212 server.go:146] Version: v1.15.0-alpha.0.1232+33b213cda30453
I0315 19:03:08.116893   47212 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
I0315 19:03:08.116927   47212 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
E0315 19:03:08.117647   47212 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0315 19:03:08.117699   47212 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0315 19:03:08.117773   47212 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0315 19:03:08.117837   47212 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0315 19:03:08.117881   47212 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0315 19:03:08.117911   47212 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0315 19:03:08.117946   47212 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
I0315 19:03:08.117961   47212 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
I0315 19:03:08.119530   47212 client.go:352] parsed scheme: ""
I0315 19:03:08.119560   47212 client.go:352] scheme "" not registered, fallback to default scheme
I0315 19:03:08.119622   47212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 19:03:08.119686   47212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 366 lines ...
I0315 19:03:09.115306   47212 client.go:352] scheme "" not registered, fallback to default scheme
I0315 19:03:09.115347   47212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 19:03:09.115390   47212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 19:03:09.115966   47212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
+ out=
+ sleep 1
E0315 19:03:09.471514   47212 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0315 19:03:09.471564   47212 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0315 19:03:09.471623   47212 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0315 19:03:09.471690   47212 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0315 19:03:09.471706   47212 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0315 19:03:09.471731   47212 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0315 19:03:09.471779   47212 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
I0315 19:03:09.471785   47212 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
I0315 19:03:09.473150   47212 client.go:352] parsed scheme: ""
I0315 19:03:09.473175   47212 client.go:352] scheme "" not registered, fallback to default scheme
I0315 19:03:09.473227   47212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 19:03:09.473295   47212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 284 lines ...
++ curl --max-time 1 -gkfs http://127.0.0.1:10252/healthz
+ out=
+ sleep 1
I0315 19:03:45.900485   50145 serving.go:319] Generated self-signed cert in-memory
I0315 19:03:46.024378   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:03:46.024611   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 19:03:46.266883   50145 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0315 19:03:46.266927   50145 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0315 19:03:46.266933   50145 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0315 19:03:46.266948   50145 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0315 19:03:46.266962   50145 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0315 19:03:46.266998   50145 controllermanager.go:155] Version: v1.15.0-alpha.0.1232+33b213cda30453
I0315 19:03:46.267587   50145 secure_serving.go:116] Serving securely on [::]:10257
I0315 19:03:46.268094   50145 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
I0315 19:03:46.268299   50145 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-controller-manager...
I0315 19:03:46.280192   50145 leaderelection.go:227] successfully acquired lease kube-system/kube-controller-manager
... skipping 15 lines ...
I0315 19:03:46.489460   50145 controllermanager.go:497] Started "cronjob"
I0315 19:03:46.490351   50145 controllermanager.go:497] Started "csrcleaner"
W0315 19:03:46.490555   50145 controllermanager.go:489] Skipping "nodeipam"
I0315 19:03:46.491411   50145 node_lifecycle_controller.go:77] Sending events to api server
I0315 19:03:46.491468   50145 cleaner.go:81] Starting CSR cleaner controller
I0315 19:03:46.491470   50145 cronjob_controller.go:94] Starting CronJob Manager
E0315 19:03:46.492244   50145 core.go:161] failed to start cloud node lifecycle controller: no cloud provider provided
W0315 19:03:46.492285   50145 controllermanager.go:489] Skipping "cloud-node-lifecycle"
I0315 19:03:46.493056   50145 controllermanager.go:497] Started "replicationcontroller"
I0315 19:03:46.493084   50145 replica_set.go:182] Starting replicationcontroller controller
I0315 19:03:46.493339   50145 controller_utils.go:1027] Waiting for caches to sync for ReplicationController controller
I0315 19:03:46.501696   50145 controllermanager.go:497] Started "namespace"
I0315 19:03:46.501784   50145 namespace_controller.go:186] Starting namespace controller
... skipping 46 lines ...
I0315 19:03:46.718551   50145 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0315 19:03:46.718613   50145 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0315 19:03:46.718654   50145 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0315 19:03:46.718689   50145 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0315 19:03:46.718786   50145 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0315 19:03:46.718828   50145 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
E0315 19:03:46.718871   50145 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0315 19:03:46.718904   50145 controllermanager.go:497] Started "resourcequota"
I0315 19:03:46.718956   50145 resource_quota_controller.go:276] Starting resource quota controller
I0315 19:03:46.719005   50145 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
I0315 19:03:46.719067   50145 resource_quota_monitor.go:301] QuotaMonitor running
I0315 19:03:47.024768   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:03:47.024899   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
... skipping 8 lines ...
I0315 19:03:47.130895   50145 deployment_controller.go:152] Starting deployment controller
I0315 19:03:47.131089   50145 controller_utils.go:1027] Waiting for caches to sync for deployment controller
I0315 19:03:47.132070   50145 controllermanager.go:497] Started "replicaset"
W0315 19:03:47.132091   50145 controllermanager.go:476] "tokencleaner" is disabled
I0315 19:03:47.132240   50145 replica_set.go:182] Starting replicaset controller
I0315 19:03:47.132269   50145 controller_utils.go:1027] Waiting for caches to sync for ReplicaSet controller
E0315 19:03:47.135192   50145 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0315 19:03:47.135305   50145 controllermanager.go:489] Skipping "service"
I0315 19:03:47.135864   50145 controllermanager.go:497] Started "podgc"
I0315 19:03:47.136778   50145 controllermanager.go:497] Started "serviceaccount"
I0315 19:03:47.139170   50145 controllermanager.go:497] Started "daemonset"
I0315 19:03:47.140319   50145 serviceaccounts_controller.go:115] Starting service account controller
I0315 19:03:47.140356   50145 controller_utils.go:1027] Waiting for caches to sync for service account controller
... skipping 34 lines ...
I0315 19:03:47.242487   50145 controller_utils.go:1034] Caches are synced for PV protection controller
I0315 19:03:47.242615   50145 controller_utils.go:1034] Caches are synced for GC controller
I0315 19:03:47.243241   50145 controller_utils.go:1034] Caches are synced for certificate controller
I0315 19:03:47.243635   50145 controller_utils.go:1034] Caches are synced for TTL controller
I0315 19:03:47.244182   50145 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
I0315 19:03:47.301985   50145 controller_utils.go:1034] Caches are synced for namespace controller
E0315 19:03:47.321495   50145 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E0315 19:03:47.322615   50145 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0315 19:03:47.434938   50145 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0315 19:03:47.435067   50145 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0315 19:03:47.504948   50145 controller_utils.go:1034] Caches are synced for persistent volume controller
I0315 19:03:47.505498   50145 controller_utils.go:1034] Caches are synced for expand controller
I0315 19:03:47.541092   50145 controller_utils.go:1034] Caches are synced for PVC protection controller
I0315 19:03:47.545896   50145 controller_utils.go:1034] Caches are synced for attach detach controller
I0315 19:03:47.693683   50145 controller_utils.go:1034] Caches are synced for ReplicationController controller
I0315 19:03:47.703415   50145 controller_utils.go:1034] Caches are synced for disruption controller
... skipping 7 lines ...
I0315 19:03:48.025240   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0315 19:03:48.027462   50145 controller_utils.go:1034] Caches are synced for garbage collector controller
I0315 19:03:48.027487   50145 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0315 19:03:48.040935   50145 controller_utils.go:1034] Caches are synced for daemon sets controller
I0315 19:03:48.087676   50145 controller_utils.go:1034] Caches are synced for taint controller
I0315 19:03:48.087784   50145 taint_manager.go:198] Starting NoExecuteTaintManager
E0315 19:03:48.417485   50145 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0315 19:03:48.621175   50145 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
I0315 19:03:48.721688   50145 controller_utils.go:1034] Caches are synced for garbage collector controller
node/127.0.0.1 created
W0315 19:03:48.886758   50145 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+ SUPPORTED_RESOURCES=("*")
+ runTests
+ foundError=
+ '[' -z '*' ']'
+ kube::log::status 'Checking kubectl version'
+ local V=0
... skipping 252 lines ...
++ sort /tmp/tmp.wH96EXXO7X/server_version_test
++ sort /tmp/tmp.wH96EXXO7X/server_client_only_version_test
++ '[' ne == eq ']'
+++ diff -iwB /tmp/tmp.wH96EXXO7X/server_version_test.sorted /tmp/tmp.wH96EXXO7X/server_client_only_version_test.sorted
++ '[' '!' -z '' ']'
++ echo ''
++ echo 'FAIL! the flag '\''--client'\'' correctly has no server version info'

++ echo '  Expected: '
FAIL! the flag '--client' correctly has no server version info
  Expected: 
+++ cat /tmp/tmp.wH96EXXO7X/server_version_test
Major=1
Minor=15+
GitVersion=v1.15.0-alpha.0.1232+33b213cda30453
GitCommit=33b213cda304533e7ffa2acf7a68dae4b4c0fc0e
... skipping 84 lines ...
|   "goVersion": "go1.12.1",
|   "compiler": "gc",
|   "platform": "linux/amd64"
| }+++ [0315 19:03:50] Testing kubectl version: check client only output matches expected output
| Successful: the flag '\''--client'\'' shows correct client info
| (B
| FAIL! the flag '\''--client'\'' correctly has no server version info
|   Expected: 
| Major=1
| Minor=15+
| GitVersion=v1.15.0-alpha.0.1232+33b213cda30453
| GitCommit=33b213cda304533e7ffa2acf7a68dae4b4c0fc0e
| GitTreeState=clean
... skipping 14 lines ...
| (B
| 42 /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/version.sh
| (B
+++ exit code: 1'
+ '[' 1 = 0 -a -n '' ']'
+ [[ 1 != 0 ]]
+ echo '+++ error: 1'
+ tee -a /var/tmp/ju23812.txt
+++ error: 1
+ rm -f /var/tmp/ju23812.txt
++ cat /var/tmp/ju23812-err.txt
+ errMsg='+ eVal run_kubectl_version_tests
+ tee -a /var/tmp/ju23812.txt
+ eval run_kubectl_version_tests
++ run_kubectl_version_tests
... skipping 75 lines ...
++ sort /tmp/tmp.wH96EXXO7X/server_version_test
++ sort /tmp/tmp.wH96EXXO7X/server_client_only_version_test
++ '\''['\'' ne == eq '\'']'\''
+++ diff -iwB /tmp/tmp.wH96EXXO7X/server_version_test.sorted /tmp/tmp.wH96EXXO7X/server_client_only_version_test.sorted
++ '\''['\'' '\''!'\'' -z '\'''\'' '\'']'\''
++ echo '\'''\''
++ echo '\''FAIL! the flag '\''\'\'''\''--client'\''\'\'''\'' correctly has no server version info'\''
++ echo '\''  Expected: '\''
+++ cat /tmp/tmp.wH96EXXO7X/server_version_test
++ echo '\''Major=1
Minor=15+
GitVersion=v1.15.0-alpha.0.1232+33b213cda30453
GitCommit=33b213cda304533e7ffa2acf7a68dae4b4c0fc0e
... skipping 43 lines ...
+ time=0.595284
++ echo '0 0.595284'
++ awk '{print $1 + $2}'
+ total=0.595284
+ [[ 1 = 0 ]]
+ failure='
      <failure type="ScriptError" message="Script Error"><![CDATA[+ eVal run_kubectl_version_tests
+ tee -a /var/tmp/ju23812.txt
+ eval run_kubectl_version_tests
++ run_kubectl_version_tests
++ set -o nounset
++ set -o errexit
++ kube::log::status '\''Testing kubectl version'\''
... skipping 72 lines ...
++ sort /tmp/tmp.wH96EXXO7X/server_version_test
++ sort /tmp/tmp.wH96EXXO7X/server_client_only_version_test
++ '\''['\'' ne == eq '\'']'\''
+++ diff -iwB /tmp/tmp.wH96EXXO7X/server_version_test.sorted /tmp/tmp.wH96EXXO7X/server_client_only_version_test.sorted
++ '\''['\'' '\''!'\'' -z '\'''\'' '\'']'\''
++ echo '\'''\''
++ echo '\''FAIL! the flag '\''\'\'''\''--client'\''\'\'''\'' correctly has no server version info'\''
++ echo '\''  Expected: '\''
+++ cat /tmp/tmp.wH96EXXO7X/server_version_test
++ echo '\''Major=1
Minor=15+
GitVersion=v1.15.0-alpha.0.1232+33b213cda30453
GitCommit=33b213cda304533e7ffa2acf7a68dae4b4c0fc0e
... skipping 36 lines ...
+ echo 1
+ tr -d '\''\n'\'']]></failure>
  '
+ content='
    <testcase assertions="1" name="run_kubectl_version_tests" time="0.595284" classname="test-cmd">
    
      <failure type="ScriptError" message="Script Error"><![CDATA[+ eVal run_kubectl_version_tests
+ tee -a /var/tmp/ju23812.txt
+ eval run_kubectl_version_tests
++ run_kubectl_version_tests
++ set -o nounset
++ set -o errexit
++ kube::log::status '\''Testing kubectl version'\''
... skipping 72 lines ...
++ sort /tmp/tmp.wH96EXXO7X/server_version_test
++ sort /tmp/tmp.wH96EXXO7X/server_client_only_version_test
++ '\''['\'' ne == eq '\'']'\''
+++ diff -iwB /tmp/tmp.wH96EXXO7X/server_version_test.sorted /tmp/tmp.wH96EXXO7X/server_client_only_version_test.sorted
++ '\''['\'' '\''!'\'' -z '\'''\'' '\'']'\''
++ echo '\'''\''
++ echo '\''FAIL! the flag '\''\'\'''\''--client'\''\'\'''\'' correctly has no server version info'\''
++ echo '\''  Expected: '\''
+++ cat /tmp/tmp.wH96EXXO7X/server_version_test
++ echo '\''Major=1
Minor=15+
GitVersion=v1.15.0-alpha.0.1232+33b213cda30453
GitCommit=33b213cda304533e7ffa2acf7a68dae4b4c0fc0e
... skipping 118 lines ...
++ sort /tmp/tmp.wH96EXXO7X/server_version_test
++ sort /tmp/tmp.wH96EXXO7X/server_client_only_version_test
++ '\''['\'' ne == eq '\'']'\''
+++ diff -iwB /tmp/tmp.wH96EXXO7X/server_version_test.sorted /tmp/tmp.wH96EXXO7X/server_client_only_version_test.sorted
++ '\''['\'' '\''!'\'' -z '\'''\'' '\'']'\''
++ echo '\'''\''
++ echo '\''FAIL! the flag '\''\'\'''\''--client'\''\'\'''\'' correctly has no server version info'\''
++ echo '\''  Expected: '\''
+++ cat /tmp/tmp.wH96EXXO7X/server_version_test
++ echo '\''Major=1
Minor=15+
GitVersion=v1.15.0-alpha.0.1232+33b213cda30453
GitCommit=33b213cda304533e7ffa2acf7a68dae4b4c0fc0e
... skipping 43 lines ...
++ kube::log::errexit
++ local err=1
++ set +o
++ grep -qe '-o errexit'
++ return
+ [[ 1 -ne 0 ]]
+ echo 'Error when running run_kubectl_version_tests'
Error when running run_kubectl_version_tests
+ foundError='run_kubectl_version_tests, '
+ set -o nounset
+ set -o errexit
+ record_command run_kubectl_config_set_tests
+ set +o nounset
+ set +o errexit
... skipping 2209 lines ...
++ [[ 1 < 0 ]]
+++ date '+[%m%d %H:%M:%S]'
++ timestamp='[0315 19:03:54]'
+++ [0315 19:03:54] Testing RESTMapper
++ echo '+++ [0315 19:03:54] Testing RESTMapper'
++ shift
++ RESTMAPPER_ERROR_FILE=/tmp/tmp.wH96EXXO7X/restmapper-error
++ kubectl get -s http://127.0.0.1:8080 --match-server-version unknownresourcetype
I0315 19:03:55.028874   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:03:55.029042   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0315 19:03:56.029444   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:03:56.029680   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0315 19:03:57.030091   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:03:57.030422   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
++ true
++ grep -q 'the server doesn'\''t have a resource type' /tmp/tmp.wH96EXXO7X/restmapper-error
+++ cat /tmp/tmp.wH96EXXO7X/restmapper-error
++ kube::log::status '"kubectl get unknownresourcetype" returns error as expected: error: the server doesn'\''t have a resource type "unknownresourcetype"'
++ local V=0
++ [[ 1 < 0 ]]
+++ date '+[%m%d %H:%M:%S]'
++ timestamp='[0315 19:03:57]'
++ echo '+++ [0315 19:03:57] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn'\''t have a resource type "unknownresourcetype"'
+++ [0315 19:03:57] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
++ shift
++ rm /tmp/tmp.wH96EXXO7X/restmapper-error
++ set +o nounset
++ set +o errexit
+ echo 0
+ tr -d '\n'
++ cat /tmp/evErr.23812.log
+ evErr=0
... skipping 12 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0315 19:03:54] Creating namespace namespace-1552676634-8506
| namespace/namespace-1552676634-8506 created
| Context "test" modified.
+++ [0315 19:03:54] Testing RESTMapper
+++ [0315 19:03:57] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn'\''t have a resource type "unknownresourcetype"
+++ exit code: 0'
+ '[' 0 = 0 -a -n '' ']'
+ [[ 0 != 0 ]]
+ rm -f /var/tmp/ju23812.txt
++ cat /var/tmp/ju23812-err.txt
+ errMsg='+ eVal run_RESTMapper_evaluation_tests
... skipping 19 lines ...
++ local V=0
++ [[ 1 < 0 ]]
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:03:54]'\''
++ echo '\''+++ [0315 19:03:54] Testing RESTMapper'\''
++ shift
++ RESTMAPPER_ERROR_FILE=/tmp/tmp.wH96EXXO7X/restmapper-error
++ kubectl get -s http://127.0.0.1:8080 --match-server-version unknownresourcetype
++ true
++ grep -q '\''the server doesn'\''\'\'''\''t have a resource type'\'' /tmp/tmp.wH96EXXO7X/restmapper-error
+++ cat /tmp/tmp.wH96EXXO7X/restmapper-error
++ kube::log::status '\''"kubectl get unknownresourcetype" returns error as expected: error: the server doesn'\''\'\'''\''t have a resource type "unknownresourcetype"'\''
++ local V=0
++ [[ 1 < 0 ]]
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:03:57]'\''
++ echo '\''+++ [0315 19:03:57] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn'\''\'\'''\''t have a resource type "unknownresourcetype"'\''
++ shift
++ rm /tmp/tmp.wH96EXXO7X/restmapper-error
++ set +o nounset
++ set +o errexit
+ echo 0
+ tr -d '\''\n'\'''
+ rm -f /var/tmp/ju23812-err.txt
+ asserts=1
... skipping 32 lines ...
++ local V=0
++ [[ 1 < 0 ]]
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:03:54]'\''
++ echo '\''+++ [0315 19:03:54] Testing RESTMapper'\''
++ shift
++ RESTMAPPER_ERROR_FILE=/tmp/tmp.wH96EXXO7X/restmapper-error
++ kubectl get -s http://127.0.0.1:8080 --match-server-version unknownresourcetype
++ true
++ grep -q '\''the server doesn'\''\'\'''\''t have a resource type'\'' /tmp/tmp.wH96EXXO7X/restmapper-error
+++ cat /tmp/tmp.wH96EXXO7X/restmapper-error
++ kube::log::status '\''"kubectl get unknownresourcetype" returns error as expected: error: the server doesn'\''\'\'''\''t have a resource type "unknownresourcetype"'\''
++ local V=0
++ [[ 1 < 0 ]]
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:03:57]'\''
++ echo '\''+++ [0315 19:03:57] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn'\''\'\'''\''t have a resource type "unknownresourcetype"'\''
++ shift
++ rm /tmp/tmp.wH96EXXO7X/restmapper-error
++ set +o nounset
++ set +o errexit
+ echo 0
+ tr -d '\''\n'\'']]></system-err>
    </testcase>
  '
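
The excerpt above also shows how the test-cmd harness produces its JUnit file: each suite's trace is teed into /var/tmp/juNNNNN.txt files, timing is summed with awk, and the <testcase> and <failure> elements are assembled by plain shell string concatenation into a CDATA section, with no filtering of the bytes that go in. Since the captured trace carries terminal escape sequences, this concatenation is the likely point where the illegal characters enter junit_test-cmd.xml. A hedged sketch (the helper name is illustrative, not the harness's own) of emitting the same record with the body sanitized and kept CDATA-safe:

# Sketch: build a JUnit <testcase> record with a sanitized CDATA body.
# Hypothetical helper; the real harness does this with shell string
# concatenation (see the failure=/content= assignments above).
import re

ILLEGAL_XML_CHARS = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f]')

def testcase_xml(name, classname, seconds, trace=None):
    head = f'<testcase assertions="1" name="{name}" time="{seconds}" classname="{classname}">'
    if trace is None:
        return head + '</testcase>'
    body = ILLEGAL_XML_CHARS.sub('', trace)
    body = body.replace(']]>', ']]]]><![CDATA[>')  # keep the CDATA well-formed
    return (head
            + f'<failure type="ScriptError" message="Script Error"><![CDATA[{body}]]></failure>'
            + '</testcase>')

print(testcase_xml("run_kubectl_version_tests", "test-cmd", 0.595284,
                   trace="+ eVal run_kubectl_version_tests\n\x1b(B..."))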
... skipping 8053 lines ...
+++ echo core.sh:186
++ echo 'core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:'
core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B++ echo -n '(B'
++ return 0
++ kubectl delete pods -s http://127.0.0.1:8080 --match-server-version
error: resource(s) were provided, but no name, label selector, or --all flag specified
++ kube::test::get_object_assert pods '{{range.items}}{{.metadata.name}}:{{end}}' valid-pod:
++ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.name}}:{{end}}' valid-pod:
++ local tries=1
++ local object=pods
++ local 'request={{range.items}}{{.metadata.name}}:{{end}}'
++ local expected=valid-pod:
... skipping 39 lines ...
+++ echo core.sh:194
++ echo 'core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:'
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B++ echo -n '(B'
++ return 0
++ kubectl delete --all pods '-lname in (valid-pod)' -s http://127.0.0.1:8080 --match-server-version
error: setting 'all' parameter but found a non empty selector. 
++ kube::test::get_object_assert pods '{{range.items}}{{.metadata.name}}:{{end}}' valid-pod:
++ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.name}}:{{end}}' valid-pod:
++ local tries=1
++ local object=pods
++ local 'request={{range.items}}{{.metadata.name}}:{{end}}'
++ local expected=valid-pod:
... skipping 349 lines ...
+++ echo core.sh:255
++ echo 'core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%'
++ echo -n '(B'
core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
(B++ return 0
++ kubectl create pdb test-pdb --selector=app=rails --min-available=2 --max-unavailable=3 --namespace=test-kubectl-describe-pod
error: min-available and max-unavailable cannot be both specified
++ kube::test::get_object_assert 'pods --namespace=test-kubectl-describe-pod' '{{range.items}}{{.metadata.name}}:{{end}}' ''
++ kube::test::object_assert 1 'pods --namespace=test-kubectl-describe-pod' '{{range.items}}{{.metadata.name}}:{{end}}' ''
++ local tries=1
++ local 'object=pods --namespace=test-kubectl-describe-pod'
++ local 'request={{range.items}}{{.metadata.name}}:{{end}}'
++ local expected=
... skipping 2639 lines ...
+++ kube::test::get_caller 3
+++ local levels=3
+++ local caller_file=/home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ local caller_line=434
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:434
++ echo 'core.sh:434: FAIL!'
core.sh:434: FAIL!
++ echo 'Get pods {{range.items}}{{.metadata.name}}:{{end}}'
++ echo '  Expected: '
++ echo '  Got:      modified-snmjt:'
Get pods {{range.items}}{{.metadata.name}}:{{end}}
  Expected: 
  Got:      modified-snmjt:
... skipping 601 lines ...
| core.sh:422: Successful get service {{range.items}}{{.metadata.name}}:{{end}}: modified:
| (Bcore.sh:423: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: modified:
| (Bservice "modified" deleted
| replicationcontroller "modified" deleted
| Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: , got: modified-snmjt:
| 
| core.sh:434: FAIL!
| Get pods {{range.items}}{{.metadata.name}}:{{end}}
|   Expected: 
|   Got:      modified-snmjt:
| (B
| 51 /home/prow/go/src/k8s.io/kubernetes/hack/lib/test.sh
| (B
+++ exit code: 1'
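
The run_pod_tests failure itself is an ordering problem rather than an XML one: after service "modified" and replicationcontroller "modified" were deleted, the suite polled for the pod list to become empty ("Waiting for Get pods ..."), but the pod the controller had created, modified-snmjt, was still pending deletion when the helper gave up, so the core.sh:434 assertion saw "modified-snmjt:" where it expected an empty list. A sketch of the poll-until-match pattern behind that check (a hypothetical Python port; the real helpers live in hack/lib/test.sh):

# Sketch: poll a Go-templated `kubectl get` until it matches, then assert.
import subprocess
import time

def wait_object_assert(resource, template, expected, tries=10, delay=1.0):
    out = None
    for _ in range(tries):
        out = subprocess.run(
            ["kubectl", "get", resource, "-o", f"go-template={template}"],
            capture_output=True, text=True, check=True).stdout
        if out == expected:
            return
        time.sleep(delay)
    raise AssertionError(f"Get {resource} {template}: expected {expected!r}, got {out!r}")

# The failing check, roughly:
# wait_object_assert("pods", "{{range.items}}{{.metadata.name}}:{{end}}", "")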
+ '[' 1 = 0 -a -n '' ']'
+ [[ 1 != 0 ]]
+ echo '+++ error: 1'
+ tee -a /var/tmp/ju23812.txt
+++ error: 1
+ rm -f /var/tmp/ju23812.txt
++ cat /var/tmp/ju23812-err.txt
+ errMsg='+ eVal run_pod_tests
+ tee -a /var/tmp/ju23812.txt
+ eval run_pod_tests
++ run_pod_tests
... skipping 1864 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:186
++ echo '\''core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl delete pods -s http://127.0.0.1:8080 --match-server-version
error: resource(s) were provided, but no name, label selector, or --all flag specified
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ local tries=1
++ local object=pods
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=valid-pod:
... skipping 35 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:194
++ echo '\''core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl delete --all pods '\''-lname in (valid-pod)'\'' -s http://127.0.0.1:8080 --match-server-version
error: setting '\''all'\'' parameter but found a non empty selector. 
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ local tries=1
++ local object=pods
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=valid-pod:
... skipping 320 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:255
++ echo '\''core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl create pdb test-pdb --selector=app=rails --min-available=2 --max-unavailable=3 --namespace=test-kubectl-describe-pod
error: min-available and max-unavailable cannot be both specified
++ kube::test::get_object_assert '\''pods --namespace=test-kubectl-describe-pod'\'' '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ kube::test::object_assert 1 '\''pods --namespace=test-kubectl-describe-pod'\'' '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ local tries=1
++ local '\''object=pods --namespace=test-kubectl-describe-pod'\''
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=
... skipping 2448 lines ...
+++ kube::test::get_caller 3
+++ local levels=3
+++ local caller_file=/home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ local caller_line=434
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:434
++ echo '\''core.sh:434: FAIL!'\''
++ echo '\''Get pods {{range.items}}{{.metadata.name}}:{{end}}'\''
++ echo '\''  Expected: '\''
++ echo '\''  Got:      modified-snmjt:'\''
++ echo '\''(B'\''
++ caller
++ echo '\''(B'\''
... skipping 25 lines ...
+ time=32.5671
++ echo '0 32.5671'
++ awk '{print $1 + $2}'
+ total=32.5671
+ [[ 1 = 0 ]]
+ failure='
      <failure type="ScriptError" message="Script Error"><![CDATA[+ eVal run_pod_tests
+ tee -a /var/tmp/ju23812.txt
+ eval run_pod_tests
++ run_pod_tests
++ set -o nounset
++ set -o errexit
++ kube::log::status '\''Testing kubectl(v1:pods)'\''
... skipping 1861 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:186
++ echo '\''core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl delete pods -s http://127.0.0.1:8080 --match-server-version
error: resource(s) were provided, but no name, label selector, or --all flag specified
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ local tries=1
++ local object=pods
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=valid-pod:
... skipping 35 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:194
++ echo '\''core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl delete --all pods '\''-lname in (valid-pod)'\'' -s http://127.0.0.1:8080 --match-server-version
error: setting '\''all'\'' parameter but found a non empty selector. 
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ local tries=1
++ local object=pods
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=valid-pod:
... skipping 320 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:255
++ echo '\''core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl create pdb test-pdb --selector=app=rails --min-available=2 --max-unavailable=3 --namespace=test-kubectl-describe-pod
error: min-available and max-unavailable cannot be both specified
++ kube::test::get_object_assert '\''pods --namespace=test-kubectl-describe-pod'\'' '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ kube::test::object_assert 1 '\''pods --namespace=test-kubectl-describe-pod'\'' '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ local tries=1
++ local '\''object=pods --namespace=test-kubectl-describe-pod'\''
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=
... skipping 2448 lines ...
+++ kube::test::get_caller 3
+++ local levels=3
+++ local caller_file=/home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ local caller_line=434
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:434
++ echo '\''core.sh:434: FAIL!'\''
++ echo '\''Get pods {{range.items}}{{.metadata.name}}:{{end}}'\''
++ echo '\''  Expected: '\''
++ echo '\''  Got:      modified-snmjt:'\''
++ echo '\''(B'\''
++ caller
++ echo '\''(B'\''
... skipping 18 lines ...
+ echo 1
+ tr -d '\''\n'\'']]></failure>
  '
+ content='
    <testcase assertions="1" name="run_pod_tests" time="32.5671" classname="test-cmd">
    
      <failure type="ScriptError" message="Script Error"><![CDATA[+ eVal run_pod_tests
+ tee -a /var/tmp/ju23812.txt
+ eval run_pod_tests
++ run_pod_tests
++ set -o nounset
++ set -o errexit
++ kube::log::status '\''Testing kubectl(v1:pods)'\''
... skipping 1861 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:186
++ echo '\''core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl delete pods -s http://127.0.0.1:8080 --match-server-version
error: resource(s) were provided, but no name, label selector, or --all flag specified
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ local tries=1
++ local object=pods
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=valid-pod:
... skipping 35 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:194
++ echo '\''core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl delete --all pods '\''-lname in (valid-pod)'\'' -s http://127.0.0.1:8080 --match-server-version
error: setting '\''all'\'' parameter but found a non empty selector. 
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ local tries=1
++ local object=pods
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=valid-pod:
... skipping 320 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:255
++ echo '\''core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl create pdb test-pdb --selector=app=rails --min-available=2 --max-unavailable=3 --namespace=test-kubectl-describe-pod
error: min-available and max-unavailable cannot be both specified
++ kube::test::get_object_assert '\''pods --namespace=test-kubectl-describe-pod'\'' '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ kube::test::object_assert 1 '\''pods --namespace=test-kubectl-describe-pod'\'' '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ local tries=1
++ local '\''object=pods --namespace=test-kubectl-describe-pod'\''
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=
... skipping 2448 lines ...
+++ kube::test::get_caller 3
+++ local levels=3
+++ local caller_file=/home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ local caller_line=434
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:434
++ echo '\''core.sh:434: FAIL!'\''
++ echo '\''Get pods {{range.items}}{{.metadata.name}}:{{end}}'\''
++ echo '\''  Expected: '\''
++ echo '\''  Got:      modified-snmjt:'\''
++ echo '\''(B'\''
++ caller
++ echo '\''(B'\''
... skipping 1889 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:186
++ echo '\''core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl delete pods -s http://127.0.0.1:8080 --match-server-version
error: resource(s) were provided, but no name, label selector, or --all flag specified
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ local tries=1
++ local object=pods
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=valid-pod:
... skipping 35 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:194
++ echo '\''core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl delete --all pods '\''-lname in (valid-pod)'\'' -s http://127.0.0.1:8080 --match-server-version
error: setting '\''all'\'' parameter but found a non empty selector. 
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' valid-pod:
++ local tries=1
++ local object=pods
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=valid-pod:
... skipping 320 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:255
++ echo '\''core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%'\''
++ echo -n '\''(B'\''
++ return 0
++ kubectl create pdb test-pdb --selector=app=rails --min-available=2 --max-unavailable=3 --namespace=test-kubectl-describe-pod
error: min-available and max-unavailable cannot be both specified
++ kube::test::get_object_assert '\''pods --namespace=test-kubectl-describe-pod'\'' '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ kube::test::object_assert 1 '\''pods --namespace=test-kubectl-describe-pod'\'' '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ local tries=1
++ local '\''object=pods --namespace=test-kubectl-describe-pod'\''
++ local '\''request={{range.items}}{{.metadata.name}}:{{end}}'\''
++ local expected=
... skipping 2448 lines ...
+++ kube::test::get_caller 3
+++ local levels=3
+++ local caller_file=/home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ local caller_line=434
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
+++ echo core.sh:434
++ echo '\''core.sh:434: FAIL!'\''
++ echo '\''Get pods {{range.items}}{{.metadata.name}}:{{end}}'\''
++ echo '\''  Expected: '\''
++ echo '\''  Got:      modified-snmjt:'\''
++ echo '\''(B'\''
++ caller
++ echo '\''(B'\''
... skipping 32 lines ...
++ kube::log::errexit
++ local err=1
++ set +o
++ grep -qe '-o errexit'
++ return
+ [[ 1 -ne 0 ]]
+ echo 'Error when running run_pod_tests'
Error when running run_pod_tests
+ foundError='run_kubectl_version_tests, run_pod_tests, '
+ set -o nounset
+ set -o errexit
+ kube::test::if_supports_resource pods
+ SUPPORTED_RESOURCES='*'
+ REQUIRED_RESOURCE=pods
... skipping 1145 lines ...
++ shift
+++ [0315 19:05:24] Creating namespace namespace-1552676724-5994
++ kubectl create namespace namespace-1552676724-5994
namespace/namespace-1552676724-5994 created
++ kubectl config set-context test --namespace=namespace-1552676724-5994
Context "test" modified.
++ kube::log::status 'Testing kubectl create with error'
++ local V=0
++ [[ 1 < 0 ]]
+++ date '+[%m%d %H:%M:%S]'
++ timestamp='[0315 19:05:24]'
++ echo '+++ [0315 19:05:24] Testing kubectl create with error'
+++ [0315 19:05:24] Testing kubectl create with error
++ shift
++ kubectl create
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
++ ERROR_FILE=/tmp/tmp.wH96EXXO7X/validation-error
++ kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml -s http://127.0.0.1:8080 --match-server-version
I0315 19:05:25.076426   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:05:25.076636   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
++ true
++ grep -q 'unknown object type "nil" in ReplicationController' /tmp/tmp.wH96EXXO7X/validation-error
+++ cat /tmp/tmp.wH96EXXO7X/validation-error
++ kube::log::status '"kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false'
++ local V=0
++ [[ 1 < 0 ]]
+++ date '+[%m%d %H:%M:%S]'
++ timestamp='[0315 19:05:25]'
++ echo '+++ [0315 19:05:25] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false'
++ shift
+++ [0315 19:05:25] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
++ rm /tmp/tmp.wH96EXXO7X/validation-error
+++ kubectl convert -f test/fixtures/doc-yaml/admin/limitrange/valid-pod.yaml -o json
+++ kubectl create -s http://127.0.0.1:8080 --match-server-version --raw /api/v1/namespaces -f - --v=8
+++ grep 'cannot be handled as a Namespace: converting (v1.Pod)'
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
++++ kube::log::errexit
++++ local 'err=0 255 0'
++++ set +o
++++ grep -qe '-o errexit'
++++ return
++ '[' 'I0315 19:05:25.634353   57214 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod in version \"v1\" cannot be handled as a Namespace: converting (v1.Pod).v1.PodSpec to (core.Namespace).core.NamespaceSpec: Finalizers not present in src","reason":"BadRequest","code":400}
  "message": "Pod in version \"v1\" cannot be handled as a Namespace: converting (v1.Pod).v1.PodSpec to (core.Namespace).core.NamespaceSpec: Finalizers not present in src",
F0315 19:05:25.634978   57214 helpers.go:114] Error from server (BadRequest): Pod in version "v1" cannot be handled as a Namespace: converting (v1.Pod).v1.PodSpec to (core.Namespace).core.NamespaceSpec: Finalizers not present in src' ']'
+++ kubectl create -s http://127.0.0.1:8080 --match-server-version --raw /api/v1/namespaces -f test/fixtures/doc-yaml/admin/limitrange/valid-pod.yaml --edit
+++ grep 'raw and --edit are mutually exclusive'
++++ kube::log::errexit
++++ local 'err=1 0'
++++ set +o
++++ grep -qe '-o errexit'
++++ return
++ '[' 'error: --raw and --edit are mutually exclusive' ']'
++ set +o nounset
++ set +o errexit
+ echo 0
+ tr -d '\n'
++ cat /tmp/evErr.23812.log
+ evErr=0
... skipping 11 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0315 19:05:24] Creating namespace namespace-1552676724-5994
| namespace/namespace-1552676724-5994 created
| Context "test" modified.
+++ [0315 19:05:24] Testing kubectl create with error
+++ [0315 19:05:25] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
+++ exit code: 0'
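
run_kubectl_create_error_tests passes, and its trace is a clean illustration of how these suites test for expected errors under `set -o errexit`: the failing kubectl call writes into an ERROR_FILE, the nonzero exit is swallowed (the bare `++ true`), and grep then asserts the expected message is present. The same pattern in a short sketch (a hypothetical helper, not a function from the scripts):

# Sketch: run a command that must fail and assert on its error text.
import subprocess

def expect_error(cmd, needle):
    proc = subprocess.run(cmd, capture_output=True, text=True)
    assert proc.returncode != 0, f"{cmd!r} unexpectedly succeeded"
    output = proc.stderr + proc.stdout
    assert needle in output, f"{needle!r} not found in: {output!r}"

expect_error(["kubectl", "create"], "must specify one of -f and -k")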
+ '[' 0 = 0 -a -n '' ']'
+ [[ 0 != 0 ]]
+ rm -f /var/tmp/ju23812.txt
++ cat /var/tmp/ju23812-err.txt
+ errMsg='+ eVal run_kubectl_create_error_tests
... skipping 12 lines ...
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:05:24]'\''
++ echo '\''+++ [0315 19:05:24] Creating namespace namespace-1552676724-5994'\''
++ shift
++ kubectl create namespace namespace-1552676724-5994
++ kubectl config set-context test --namespace=namespace-1552676724-5994
++ kube::log::status '\''Testing kubectl create with error'\''
++ local V=0
++ [[ 1 < 0 ]]
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:05:24]'\''
++ echo '\''+++ [0315 19:05:24] Testing kubectl create with error'\''
++ shift
++ kubectl create
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
++ ERROR_FILE=/tmp/tmp.wH96EXXO7X/validation-error
++ kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml -s http://127.0.0.1:8080 --match-server-version
++ true
++ grep -q '\''unknown object type "nil" in ReplicationController'\'' /tmp/tmp.wH96EXXO7X/validation-error
+++ cat /tmp/tmp.wH96EXXO7X/validation-error
++ kube::log::status '\''"kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false'\''
++ local V=0
++ [[ 1 < 0 ]]
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:05:25]'\''
++ echo '\''+++ [0315 19:05:25] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false'\''
++ shift
++ rm /tmp/tmp.wH96EXXO7X/validation-error
+++ kubectl convert -f test/fixtures/doc-yaml/admin/limitrange/valid-pod.yaml -o json
+++ kubectl create -s http://127.0.0.1:8080 --match-server-version --raw /api/v1/namespaces -f - --v=8
+++ grep '\''cannot be handled as a Namespace: converting (v1.Pod)'\''
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
++++ kube::log::errexit
++++ local '\''err=0 255 0'\''
++++ set +o
++++ grep -qe '\''-o errexit'\''
++++ return
++ '\''['\'' '\''I0315 19:05:25.634353   57214 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod in version \"v1\" cannot be handled as a Namespace: converting (v1.Pod).v1.PodSpec to (core.Namespace).core.NamespaceSpec: Finalizers not present in src","reason":"BadRequest","code":400}
  "message": "Pod in version \"v1\" cannot be handled as a Namespace: converting (v1.Pod).v1.PodSpec to (core.Namespace).core.NamespaceSpec: Finalizers not present in src",
F0315 19:05:25.634978   57214 helpers.go:114] Error from server (BadRequest): Pod in version "v1" cannot be handled as a Namespace: converting (v1.Pod).v1.PodSpec to (core.Namespace).core.NamespaceSpec: Finalizers not present in src'\'' '\'']'\''
+++ kubectl create -s http://127.0.0.1:8080 --match-server-version --raw /api/v1/namespaces -f test/fixtures/doc-yaml/admin/limitrange/valid-pod.yaml --edit
+++ grep '\''raw and --edit are mutually exclusive'\''
++++ kube::log::errexit
++++ local '\''err=1 0'\''
++++ set +o
++++ grep -qe '\''-o errexit'\''
++++ return
++ '\''['\'' '\''error: --raw and --edit are mutually exclusive'\'' '\'']'\''
++ set +o nounset
++ set +o errexit
+ echo 0
+ tr -d '\''\n'\'''
+ rm -f /var/tmp/ju23812-err.txt
+ asserts=1
... skipping 25 lines ...
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:05:24]'\''
++ echo '\''+++ [0315 19:05:24] Creating namespace namespace-1552676724-5994'\''
++ shift
++ kubectl create namespace namespace-1552676724-5994
++ kubectl config set-context test --namespace=namespace-1552676724-5994
++ kube::log::status '\''Testing kubectl create with error'\''
++ local V=0
++ [[ 1 < 0 ]]
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:05:24]'\''
++ echo '\''+++ [0315 19:05:24] Testing kubectl create with error'\''
++ shift
++ kubectl create
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
++ ERROR_FILE=/tmp/tmp.wH96EXXO7X/validation-error
++ kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml -s http://127.0.0.1:8080 --match-server-version
++ true
++ grep -q '\''unknown object type "nil" in ReplicationController'\'' /tmp/tmp.wH96EXXO7X/validation-error
+++ cat /tmp/tmp.wH96EXXO7X/validation-error
++ kube::log::status '\''"kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false'\''
++ local V=0
++ [[ 1 < 0 ]]
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:05:25]'\''
++ echo '\''+++ [0315 19:05:25] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false'\''
++ shift
++ rm /tmp/tmp.wH96EXXO7X/validation-error
+++ kubectl convert -f test/fixtures/doc-yaml/admin/limitrange/valid-pod.yaml -o json
+++ kubectl create -s http://127.0.0.1:8080 --match-server-version --raw /api/v1/namespaces -f - --v=8
+++ grep '\''cannot be handled as a Namespace: converting (v1.Pod)'\''
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
++++ kube::log::errexit
++++ local '\''err=0 255 0'\''
++++ set +o
++++ grep -qe '\''-o errexit'\''
++++ return
++ '\''['\'' '\''I0315 19:05:25.634353   57214 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod in version \"v1\" cannot be handled as a Namespace: converting (v1.Pod).v1.PodSpec to (core.Namespace).core.NamespaceSpec: Finalizers not present in src","reason":"BadRequest","code":400}
  "message": "Pod in version \"v1\" cannot be handled as a Namespace: converting (v1.Pod).v1.PodSpec to (core.Namespace).core.NamespaceSpec: Finalizers not present in src",
F0315 19:05:25.634978   57214 helpers.go:114] Error from server (BadRequest): Pod in version "v1" cannot be handled as a Namespace: converting (v1.Pod).v1.PodSpec to (core.Namespace).core.NamespaceSpec: Finalizers not present in src'\'' '\'']'\''
+++ kubectl create -s http://127.0.0.1:8080 --match-server-version --raw /api/v1/namespaces -f test/fixtures/doc-yaml/admin/limitrange/valid-pod.yaml --edit
+++ grep '\''raw and --edit are mutually exclusive'\''
++++ kube::log::errexit
++++ local '\''err=1 0'\''
++++ set +o
++++ grep -qe '\''-o errexit'\''
++++ return
++ '\''['\'' '\''error: --raw and --edit are mutually exclusive'\'' '\'']'\''
++ set +o nounset
++ set +o errexit
+ echo 0
+ tr -d '\n']]></system-err>
    </testcase>
  '
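Two invalid-input paths were exercised in that testcase: POSTing a Pod manifest to --raw /api/v1/namespaces (rejected server-side with BadRequest, since a Pod cannot be converted to a Namespace) and combining --raw with --edit (rejected client-side). The trace shows the second check as a grep pipeline whose non-empty output is then [ ]-tested; a sketch of the same shape:

  # Non-empty grep output proves the client-side flag validation fired.
  [ "$(kubectl create -s http://127.0.0.1:8080 --match-server-version --raw /api/v1/namespaces \
        -f test/fixtures/doc-yaml/admin/limitrange/valid-pod.yaml --edit 2>&1 \
        | grep 'raw and --edit are mutually exclusive')" ]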
... skipping 273 lines ...
I0315 19:05:34.604289   47212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 19:05:34.604389   47212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 19:05:34.604969   47212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 19:05:34.608306   47212 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
++ kubectl -s http://127.0.0.1:8080 --match-server-version get resource/myobj
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
++ kubectl -s http://127.0.0.1:8080 --match-server-version delete customresourcedefinition resources.mygroup.example.com
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
++ set +o nounset
++ set +o errexit
+ echo 0
+ tr -d '\n'
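This testcase verifies that a server-side dry-run apply of a custom resource persists nothing: the follow-up get must return NotFound. A sketch using the flag spellings from this trace (--experimental-server-side and --server-dry-run are the pre-GA names at v1.15.0-alpha; the trace passes the server flags twice, which is redundant but harmless):

  kubectl -s http://127.0.0.1:8080 --match-server-version apply --experimental-server-side --server-dry-run -f hack/testdata/CRD/resource.yaml
  # The object must still be absent after the dry run (get is expected to fail with NotFound):
  kubectl -s http://127.0.0.1:8080 --match-server-version get resource/myobj && exit 1
  kubectl -s http://127.0.0.1:8080 --match-server-version delete customresourcedefinition resources.mygroup.example.com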
... skipping 183 lines ...
++ echo -n '(B'
++ return 0
++ kubectl delete -f hack/testdata/pod.yaml -s http://127.0.0.1:8080 --match-server-version
++ kubectl -s https://127.0.0.1:6443 --token=admin-token --insecure-skip-tls-verify=true --match-server-version create -f -
++ kubectl -s http://127.0.0.1:8080 --match-server-version apply --experimental-server-side --server-dry-run -f hack/testdata/CRD/resource.yaml -s http://127.0.0.1:8080 --match-server-version
++ kubectl -s http://127.0.0.1:8080 --match-server-version get resource/myobj
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
++ kubectl -s http://127.0.0.1:8080 --match-server-version delete customresourcedefinition resources.mygroup.example.com
++ set +o nounset
++ set +o errexit
+ echo 0
+ tr -d '\n'
+ rm -f /var/tmp/ju23812-err.txt
... skipping 157 lines ...
++ echo -n '(B'
++ return 0
++ kubectl delete -f hack/testdata/pod.yaml -s http://127.0.0.1:8080 --match-server-version
++ kubectl -s https://127.0.0.1:6443 --token=admin-token --insecure-skip-tls-verify=true --match-server-version create -f -
++ kubectl -s http://127.0.0.1:8080 --match-server-version apply --experimental-server-side --server-dry-run -f hack/testdata/CRD/resource.yaml -s http://127.0.0.1:8080 --match-server-version
++ kubectl -s http://127.0.0.1:8080 --match-server-version get resource/myobj
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
++ kubectl -s http://127.0.0.1:8080 --match-server-version delete customresourcedefinition resources.mygroup.example.com
++ set +o nounset
++ set +o errexit
+ echo 0
+ tr -d '\n']]></system-err>
    </testcase>
... skipping 2247 lines ...
+++ echo create.sh:34
++ echo 'create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod'
create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
++ echo -n '(B'
(B++ return 0
+++ kubectl get pods selector-test-pod-dont-apply -s http://127.0.0.1:8080 --match-server-version
++ output_message='Error from server (NotFound): pods "selector-test-pod-dont-apply" not found'
++ kube::test::if_has_string 'Error from server (NotFound): pods "selector-test-pod-dont-apply" not found' 'pods "selector-test-pod-dont-apply" not found'
++ local 'message=Error from server (NotFound): pods "selector-test-pod-dont-apply" not found'
++ local 'match=pods "selector-test-pod-dont-apply" not found'
++ grep -q 'pods "selector-test-pod-dont-apply" not found'
++ echo Successful
++ echo 'message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found'
++ echo 'has:pods "selector-test-pod-dont-apply" not found'
++ return 0
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
++ kubectl delete pods selector-test-pod
pod "selector-test-pod" deleted
++ set +o nounset
++ set +o errexit
+ echo 0
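The if_has_string expansion above is the suite's grep-based assertion; reconstructed from this trace it amounts to the sketch below (the here-string feeding grep and the failure branch are assumptions — the trace only shows the success path):

  kube::test::if_has_string() {
    local message=$1
    local match=$2
    if grep -q "${match}" <<< "${message}"; then
      echo Successful
      echo "message:${message}"
      echo "has:${match}"
      return 0
    fi
    echo "FAIL: did not find '${match}' in output"   # assumed failure branch
    return 1
  }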
... skipping 19 lines ...
| Context "test" modified.
+++ [0315 19:05:38] Testing kubectl create filter
| create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
| (Bpod/selector-test-pod created
| create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
| (BSuccessful
| message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
| has:pods "selector-test-pod-dont-apply" not found
| pod "selector-test-pod" deleted
+++ exit code: 0'
+ '[' 0 = 0 -a -n '' ']'
+ [[ 0 != 0 ]]
+ rm -f /var/tmp/ju23812.txt
... skipping 69 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/create.sh
+++ echo create.sh:34
++ echo 'create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod'
++ echo -n '(B'
++ return 0
+++ kubectl get pods selector-test-pod-dont-apply -s http://127.0.0.1:8080 --match-server-version
++ output_message='Error from server (NotFound): pods "selector-test-pod-dont-apply" not found'
++ kube::test::if_has_string 'Error from server (NotFound): pods "selector-test-pod-dont-apply" not found' 'pods "selector-test-pod-dont-apply" not found'
++ local 'message=Error from server (NotFound): pods "selector-test-pod-dont-apply" not found'
++ local 'match=pods "selector-test-pod-dont-apply" not found'
++ grep -q 'pods "selector-test-pod-dont-apply" not found'
++ echo Successful
++ echo 'message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found'
++ echo 'has:pods "selector-test-pod-dont-apply" not found'
++ return 0
++ kubectl delete pods selector-test-pod
++ set +o nounset
++ set +o errexit
+ echo 0
... skipping 83 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/create.sh
+++ echo create.sh:34
++ echo 'create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod'
++ echo -n '(B'
++ return 0
+++ kubectl get pods selector-test-pod-dont-apply -s http://127.0.0.1:8080 --match-server-version
++ output_message='Error from server (NotFound): pods "selector-test-pod-dont-apply" not found'
++ kube::test::if_has_string 'Error from server (NotFound): pods "selector-test-pod-dont-apply" not found' 'pods "selector-test-pod-dont-apply" not found'
++ local 'message=Error from server (NotFound): pods "selector-test-pod-dont-apply" not found'
++ local 'match=pods "selector-test-pod-dont-apply" not found'
++ grep -q 'pods "selector-test-pod-dont-apply" not found'
++ echo Successful
++ echo 'message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found'
++ echo 'has:pods "selector-test-pod-dont-apply" not found'
++ return 0
++ kubectl delete pods selector-test-pod
++ set +o nounset
++ set +o errexit
+ echo 0
... skipping 450 lines ...
++ kubectl delete deployments,rs,pods --all --cascade=false --grace-period=0
deployment.extensions "my-depl" deleted
I0315 19:05:43.571083   47212 controller.go:606] quota admission added evaluator for: replicasets.extensions
replicaset.extensions "my-depl-64775887d7" deleted
replicaset.extensions "my-depl-656cffcbcc" deleted
pod "my-depl-64775887d7-n5mqd" deleted
E0315 19:05:43.827032   50145 replica_set.go:450] Sync "namespace-1552676740-11421/my-depl-64775887d7" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-64775887d7": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1552676740-11421/my-depl-64775887d7, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 5450a67c-4755-11e9-93be-5aefd4b81fff, UID in object meta: 
E0315 19:05:43.864289   50145 replica_set.go:450] Sync "namespace-1552676740-11421/my-depl-64775887d7" failed with replicasets.apps "my-depl-64775887d7" not found
pod "my-depl-656cffcbcc-vftst" deleted
++ kube::test::wait_object_assert deployments '{{range.items}}{{.metadata.name}}:{{end}}' ''
++ kube::test::object_assert 10 deployments '{{range.items}}{{.metadata.name}}:{{end}}' ''
++ local tries=10
++ local object=deployments
++ local 'request={{range.items}}{{.metadata.name}}:{{end}}'
... skipping 130 lines ...
I0315 19:05:48.088865   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:05:48.089129   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0315 19:05:49.089337   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:05:49.089463   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0315 19:05:50.089791   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:05:50.090057   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
++ output_message='Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552676740-11421\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1552676740-11421"
Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552676740-11421\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-03-15T19:05:45Z" "generation":'\''\x01'\'' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-03-15T19:05:45Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-03-15T19:05:45Z"]] "name":"nginx" "namespace":"namespace-1552676740-11421" "resourceVersion":"607" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1552676740-11421/deployments/nginx" "uid":"56070828-4755-11e9-93be-5aefd4b81fff"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\''\x03'\'' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\''\x01'\'' "maxUnavailable":'\''\x01'\''] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'\''P'\'' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\''\x1e'\'']]] "status":map["conditions":[map["lastTransitionTime":"2019-03-15T19:05:45Z" "lastUpdateTime":"2019-03-15T19:05:45Z" "message":"Deployment does not have 
minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\''\x01'\'' "replicas":'\''\x03'\'' "unavailableReplicas":'\''\x03'\'' "updatedReplicas":'\''\x03'\'']]}
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again'
++ kube::test::if_has_string 'Error from server (Conflict): error when applying patch:
... skipping 6 lines ...
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again' 'Error from server (Conflict)'
++ local 'message=Error from server (Conflict): error when applying patch:
... skipping 6 lines ...
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again'
++ local 'match=Error from server (Conflict)'
++ grep -q 'Error from server (Conflict)'
++ echo Successful
Successful
++ echo 'message:Error from server (Conflict): error when applying patch:
... skipping 6 lines ...
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again'
++ echo 'has:Error from server (Conflict)'
++ return 0
message:Error from server (Conflict): error when applying patch:
... skipping 6 lines ...
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
++ kubectl apply -f hack/testdata/deployment-label-change2.yaml --overwrite=true --force=true --grace-period=10
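The Conflict above is the expected outcome: the manifest switches the selector and template labels to nginx2 while its last-applied configuration pins resourceVersion "99", but the live object is already at resourceVersion "607" (kube-controller-manager has since written status), so the patch is refused. The suite then recovers with --force, which at this version deletes and re-creates the deployment rather than patching it. The pair of calls, condensed from the trace (the exit-1 guard on the first call is an assumption):

  kubectl apply -f hack/testdata/deployment-label-change2.yaml -s http://127.0.0.1:8080 --match-server-version && exit 1   # expected: Error from server (Conflict)
  kubectl apply -f hack/testdata/deployment-label-change2.yaml --overwrite=true --force=true --grace-period=10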
I0315 19:05:51.090474   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:05:51.090753   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0315 19:05:52.091171   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:05:52.091487   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0315 19:05:53.091767   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
... skipping 129 lines ...
| (Bapps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
| (Bapps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
| (Bapps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
| (Bdeployment.extensions/nginx created
| apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
| (BSuccessful
| message:Error from server (Conflict): error when applying patch:
| {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552676740-11421\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
| to:
| Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
| Name: "nginx", Namespace: "namespace-1552676740-11421"
| Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552676740-11421\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-03-15T19:05:45Z" "generation":'\''\x01'\'' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-03-15T19:05:45Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-03-15T19:05:45Z"]] "name":"nginx" "namespace":"namespace-1552676740-11421" "resourceVersion":"607" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1552676740-11421/deployments/nginx" "uid":"56070828-4755-11e9-93be-5aefd4b81fff"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\''\x03'\'' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\''\x01'\'' "maxUnavailable":'\''\x01'\''] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'\''P'\'' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\''\x1e'\'']]] "status":map["conditions":[map["lastTransitionTime":"2019-03-15T19:05:45Z" "lastUpdateTime":"2019-03-15T19:05:45Z" "message":"Deployment does not have 
minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\''\x01'\'' "replicas":'\''\x03'\'' "unavailableReplicas":'\''\x03'\'' "updatedReplicas":'\''\x03'\'']]}
| for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
| has:Error from server (Conflict)
| deployment.extensions/nginx configured
| Successful
| message:        "name": "nginx2"
|           "name": "nginx2"
| has:"name": "nginx2"
| Successful
... skipping 448 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/apps.sh
+++ echo apps.sh:147
++ echo 'apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx'
++ echo -n '(B'
++ return 0
+++ kubectl apply -f hack/testdata/deployment-label-change2.yaml -s http://127.0.0.1:8080 --match-server-version
++ output_message='Error from server (Conflict): error when applying patch:
... skipping 6 lines ...
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again'
++ kube::test::if_has_string 'Error from server (Conflict): error when applying patch:
... skipping 6 lines ...
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again' 'Error from server (Conflict)'
++ local 'message=Error from server (Conflict): error when applying patch:
... skipping 6 lines ...
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again'
++ local 'match=Error from server (Conflict)'
++ grep -q 'Error from server (Conflict)'
++ echo Successful
++ echo '\''message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552676740-11421\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1552676740-11421"
Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552676740-11421\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-03-15T19:05:45Z" "generation":'\''\'\'''\''\x01'\''\'\'''\'' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-03-15T19:05:45Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-03-15T19:05:45Z"]] "name":"nginx" "namespace":"namespace-1552676740-11421" "resourceVersion":"607" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1552676740-11421/deployments/nginx" "uid":"56070828-4755-11e9-93be-5aefd4b81fff"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\''\'\'''\''\x03'\''\'\'''\'' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\''\'\'''\''\x01'\''\'\'''\'' "maxUnavailable":'\''\'\'''\''\x01'\''\'\'''\''] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'\''\'\'''\''P'\''\'\'''\'' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\''\'\'''\''\x1e'\''\'\'''\'']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-03-15T19:05:45Z" "lastUpdateTime":"2019-03-15T19:05:45Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\''\'\'''\''\x01'\''\'\'''\'' "replicas":'\''\'\'''\''\x03'\''\'\'''\'' "unavailableReplicas":'\''\'\'''\''\x03'\''\'\'''\'' "updatedReplicas":'\''\'\'''\''\x03'\''\'\'''\'']]}
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again'\''
++ echo 'has:Error from server (Conflict)'
++ return 0
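The Conflict traced above is the step's expected outcome: hack/testdata/deployment-label-change2.yaml changes the Deployment's selector and pod-template labels from nginx1 to nginx2, and by the time kubectl apply submits its patch the live object has already been updated by kube-controller-manager (resourceVersion 607), so the optimistic-concurrency check rejects it. A minimal way to provoke the same error, assuming the suite's local apiserver at http://127.0.0.1:8080:

  # re-applying a selector change against an object that has moved on
  # yields "Error from server (Conflict)" rather than a partial update
  kubectl apply -f hack/testdata/deployment-label-change2.yaml \
      -s http://127.0.0.1:8080 --match-server-version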
++ kubectl apply -f hack/testdata/deployment-label-change2.yaml --overwrite=true --force=true --grace-period=10
+++ kubectl apply view-last-applied deploy/nginx -o json -s http://127.0.0.1:8080 --match-server-version
+++ grep nginx2
++ output_message='        "name": "nginx2"
          "name": "nginx2"'
... skipping 502 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/apps.sh
+++ echo apps.sh:147
++ echo 'apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx'
++ echo -n '(B'
++ return 0
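apps.sh:147 is an instance of the suite's get_object_assert pattern: render a Go template over the fetched object and compare the result against an expected string. A reconstructed sketch of that helper, inferred from the expansions in this trace (the real implementation also supports retries and extra kubectl flags):

  kube::test::get_object_assert() {
    local object=$1 request=$2 expected=$3
    local res
    res=$(kubectl get "${object}" -o go-template="${request}" \
        -s http://127.0.0.1:8080 --match-server-version)
    if [[ "${res}" == "${expected}" ]]; then
      echo "Successful get ${object} ${request}: ${res}"
    else
      echo "FAIL: expected '${expected}', got '${res}'"
      return 1
    fi
  }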
+++ kubectl apply -f hack/testdata/deployment-label-change2.yaml -s http://127.0.0.1:8080 --match-server-version
++ output_message='Error from server (Conflict): error when applying patch:
... same patch, target resource, and object dump as above ...
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again'
++ kube::test::if_has_string '... same Conflict message as above ...' 'Error from server (Conflict)'
++ local 'message=... same Conflict message as above ...'
++ local 'match=Error from server (Conflict)'
++ grep -q 'Error from server (Conflict)'
++ echo Successful
++ echo 'message:... same Conflict message as above ...'
++ echo 'has:Error from server (Conflict)'
++ return 0
++ kubectl apply -f hack/testdata/deployment-label-change2.yaml --overwrite=true --force=true --grace-period=10
+++ kubectl apply view-last-applied deploy/nginx -o json -s http://127.0.0.1:8080 --match-server-version
+++ grep nginx2
++ output_message='        "name": "nginx2"
          "name": "nginx2"'
... skipping 2238 lines ...
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
++ echo -n '(B'
++ return 0
+++ kubectl get pods abc -s http://127.0.0.1:8080 --match-server-version
I0315 19:06:06.098688   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:06:06.098866   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
++ output_message='Error from server (NotFound): pods "abc" not found'
++ kube::test::if_has_string 'Error from server (NotFound): pods "abc" not found' 'pods "abc" not found'
++ local 'message=Error from server (NotFound): pods "abc" not found'
++ local 'match=pods "abc" not found'
++ grep -q 'pods "abc" not found'
++ echo Successful
Successful
++ echo 'message:Error from server (NotFound): pods "abc" not found'
++ echo 'has:pods "abc" not found'
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
++ return 0
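The NotFound assertion above is the suite's if_has_string pattern: grep -q for an expected substring in the captured output and report Successful or FAIL!. Reconstructed from the success path here and the failure path later in this log:

  kube::test::if_has_string() {
    local message=$1 match=$2
    if grep -q "${match}" <<< "${message}"; then
      echo Successful
      echo "message:${message}"
      echo "has:${match}"
      return 0
    fi
    echo 'FAIL!'
    echo "message:${message}"
    echo "has not:${match}"
    caller
    return 1
  }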
++ kube::test::get_object_assert pods '{{range.items}}{{.metadata.name}}:{{end}}' ''
++ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.name}}:{{end}}' ''
++ local tries=1
++ local object=pods
... skipping 15 lines ...
+++ echo get.sh:37
++ echo 'get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: '
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
++ echo -n '(B'
++ return 0
+++ kubectl get pods abc -s http://127.0.0.1:8080 --match-server-version -o name
++ output_message='Error from server (NotFound): pods "abc" not found'
++ kube::test::if_has_string 'Error from server (NotFound): pods "abc" not found' 'pods "abc" not found'
++ local 'message=Error from server (NotFound): pods "abc" not found'
++ local 'match=pods "abc" not found'
++ grep -q 'pods "abc" not found'
++ echo Successful
++ echo 'message:Error from server (NotFound): pods "abc" not found'
++ echo 'has:pods "abc" not found'
++ return 0
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
++ kube::test::get_object_assert pods '{{range.items}}{{.metadata.name}}:{{end}}' ''
++ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.name}}:{{end}}' ''
++ local tries=1
++ local object=pods
++ local 'request={{range.items}}{{.metadata.name}}:{{end}}'
... skipping 189 lines ...
I0315 19:06:08.099633   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:06:08.099790   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0315 19:06:09.100103   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:06:09.100349   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0315 19:06:10.100773   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:06:10.101002   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
++ output_message='error: the server doesn'\''t have a resource type "foobar"'
++ kube::test::if_has_not_string 'error: the server doesn'\''t have a resource type "foobar"' 'No resources found'
++ local 'message=error: the server doesn'\''t have a resource type "foobar"'
++ local 'match=No resources found'
++ grep -q 'No resources found'
++ echo Successful
++ echo 'message:error: the server doesn'\''t have a resource type "foobar"'
++ echo 'has not:No resources found'
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
++ return 0
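kube::test::if_has_not_string is the inverse check: the assertion passes only when the grep fails. Here it pins down that asking for an unknown resource type reports a server-side error rather than the misleading 'No resources found'. The essence of the check:

  kubectl get foobar -s http://127.0.0.1:8080 --match-server-version 2>&1 \
      | grep -q 'No resources found' && echo 'FAIL!' || echo Successful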
+++ kubectl get pods -s http://127.0.0.1:8080 --match-server-version
++ output_message='No resources found.'
++ kube::test::if_has_string 'No resources found.' 'No resources found'
++ local 'message=No resources found.'
... skipping 54 lines ...
+++ echo get.sh:93
++ echo 'get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: '
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
++ echo -n '(B'
++ return 0
+++ kubectl get pods abc -s http://127.0.0.1:8080 --match-server-version -o json
++ output_message='Error from server (NotFound): pods "abc" not found'
++ kube::test::if_has_string 'Error from server (NotFound): pods "abc" not found' 'pods "abc" not found'
++ local 'message=Error from server (NotFound): pods "abc" not found'
++ local 'match=pods "abc" not found'
++ grep -q 'pods "abc" not found'
++ echo Successful
Successful
++ echo 'message:Error from server (NotFound): pods "abc" not found'
++ echo 'has:pods "abc" not found'
++ return 0
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
++ kube::test::if_has_string 'Error from server (NotFound): pods "abc" not found' List
++ local 'message=Error from server (NotFound): pods "abc" not found'
++ local match=List
++ grep -q List
++ echo 'FAIL!'
FAIL!
++ echo 'message:Error from server (NotFound): pods "abc" not found'
++ echo 'has not:List'
++ caller
message:Error from server (NotFound): pods "abc" not found
has not:List
99 /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
++ return 1
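This FAIL! is a genuine assertion failure: caller reports get.sh line 99, where the test expects the word List in the output of a by-name get, but a NotFound for pods "abc" prints only the single error line, so grep -q List fails and if_has_string returns 1. Reduced to its essentials:

  output_message=$(kubectl get pods abc -o json \
      -s http://127.0.0.1:8080 --match-server-version 2>&1 || true)
  grep -q List <<< "${output_message}"   # exits non-zero: no "List" in the message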
+++ kubectl --v=6 --namespace default get all --chunk-size=0 -s http://127.0.0.1:8080 --match-server-version
++ output_message='I0315 19:06:10.873221   59514 loader.go:359] Config loaded from file /tmp/tmp.wH96EXXO7X/.kube/config
I0315 19:06:10.874852   59514 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 2911 lines ...
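--chunk-size=0 in the command above disables list pagination, so kubectl issues a single unpaginated GET per resource type instead of limit/continue chunked requests; --v=6 logs each round trip, as in the round_trippers line. For comparison (values illustrative; 500 is the flag's default):

  kubectl get all --chunk-size=500 --v=6   # paginated, follows continue tokens
  kubectl get all --chunk-size=0 --v=6     # one full LIST per resource type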
Successful
message:valid-pod:
has:valid-pod:
+++ kubectl get pod valid-pod --allow-missing-template-keys=false -o 'jsonpath={.missing}' -s http://127.0.0.1:8080 --match-server-version
I0315 19:06:16.103909   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:06:16.104098   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
++ output_message='error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-15T19:06:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-15T19:06:15Z"}}, "name":"valid-pod", "namespace":"namespace-1552676774-20994", "resourceVersion":"712", "selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod", "uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}'
++ kube::test::if_has_string '... same jsonpath error message as above ...' 'missing is not found'
++ local 'message=... same jsonpath error message as above ...'
++ local 'match=missing is not found'
++ grep -q 'missing is not found'
++ echo Successful
Successful
++ echo 'message:... same jsonpath error message as above ...'
++ echo 'has:missing is not found'
message:... same jsonpath error message as above ...
has:missing is not found
++ return 0
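--allow-missing-template-keys=false turns a missing jsonpath field into a hard error, with the template and the full object echoed for debugging, where the default (true) would simply render nothing. Assuming a pod named valid-pod as in this suite:

  kubectl get pod valid-pod -o 'jsonpath={.missing}'   # empty output, exit 0
  kubectl get pod valid-pod --allow-missing-template-keys=false \
      -o 'jsonpath={.missing}'                         # error: missing is not found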
+++ kubectl get pod valid-pod --allow-missing-template-keys=false -o 'go-template={{.missing}}' -s http://127.0.0.1:8080 --match-server-version
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
++ output_message='Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]'
++ kube::test::if_has_string '... same go-template error message as above ...' 'map has no entry for key "missing"'
++ local 'message=... same go-template error message as above ...'
++ local 'match=map has no entry for key "missing"'
++ grep -q 'map has no entry for key "missing"'
++ echo Successful
Successful
++ echo 'message:... same go-template error message as above ...'
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
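The dump above is kubectl's strict-template failure path: a go-template that references a key the object lacks aborts the command, and kubectl prints the template, the raw JSON, and the decoded object for debugging. A minimal pair reproducing both modes, assuming the same local server and valid-pod as in this run:

    # Strict mode: --allow-missing-template-keys=false maps to Go text/template's
    # "missingkey=error" option, which produces the failure dumped above.
    kubectl get pod valid-pod --allow-missing-template-keys=false \
        -o go-template='{{.missing}}' -s http://127.0.0.1:8080 --match-server-version
    # Default mode renders the missing key as "<no value>" instead of failing,
    # matching the "<no value>" seen later in this log.
    kubectl get pod valid-pod \
        -o go-template='{{.missing}}' -s http://127.0.0.1:8080 --match-server-version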
... skipping 718 lines ...
  phase: Pending
  qosClass: Guaranteed
++ echo 'has:name: valid-pod'
++ return 0
has:name: valid-pod
+++ kubectl get pods/invalid-pod -w --request-timeout=1 -s http://127.0.0.1:8080 --match-server-version
++ output_message='Error from server (NotFound): pods "invalid-pod" not found'
++ kube::test::if_has_string 'Error from server (NotFound): pods "invalid-pod" not found' '"invalid-pod" not found'
++ local 'message=Error from server (NotFound): pods "invalid-pod" not found'
++ local 'match="invalid-pod" not found'
++ grep -q '"invalid-pod" not found'
++ echo Successful
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
++ echo 'message:Error from server (NotFound): pods "invalid-pod" not found'
++ echo 'has:"invalid-pod" not found'
++ return 0
has:"invalid-pod" not found
++ kubectl delete pods valid-pod -s http://127.0.0.1:8080 --match-server-version
I0315 19:06:20.105941   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:06:20.106157   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
... skipping 261 lines ...
+++ [0315 19:06:05] Creating namespace namespace-1552676765-28858
| namespace/namespace-1552676765-28858 created
| Context "test" modified.
+++ [0315 19:06:05] Testing kubectl get
| get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
| Successful
| message:Error from server (NotFound): pods "abc" not found
| has:pods "abc" not found
| get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
| Successful
| message:Error from server (NotFound): pods "abc" not found
| has:pods "abc" not found
| get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
| Successful
| message:{
|     "apiVersion": "v1",
|     "items": [],
... skipping 23 lines ...
| has not:No resources found
| Successful
| message:NAME
| has not:No resources found
| get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
| Successful
| message:error: the server doesn'\''t have a resource type "foobar"
| has not:No resources found
| Successful
| message:No resources found.
| has:No resources found
| Successful
| message:
| has not:No resources found
| Successful
| message:No resources found.
| has:No resources found
| get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
| Successful
| message:Error from server (NotFound): pods "abc" not found
| has:pods "abc" not found
| FAIL!
| message:Error from server (NotFound): pods "abc" not found
| has not:List
| 99 /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
| Successful
| message:I0315 19:06:10.873221   59514 loader.go:359] Config loaded from file /tmp/tmp.wH96EXXO7X/.kube/config
| I0315 19:06:10.874852   59514 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
| I0315 19:06:10.940546   59514 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 707 lines ...
| }
| get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
| (B<no value>Successful
| message:valid-pod:
| has:valid-pod:
| Successful
| message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
| 	template was:
| 		{.missing}
| 	object given to jsonpath engine was:
| 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-15T19:06:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-15T19:06:15Z"}}, "name":"valid-pod", "namespace":"namespace-1552676774-20994", "resourceVersion":"712", "selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod", "uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
| has:missing is not found
| Successful
| message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
| 	template was:
| 		{{.missing}}
| 	raw data was:
| 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
| 	object given to template engine was:
| 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
... skipping 156 lines ...
|   terminationGracePeriodSeconds: 30
| status:
|   phase: Pending
|   qosClass: Guaranteed
| has:name: valid-pod
| Successful
| message:Error from server (NotFound): pods "invalid-pod" not found
| has:"invalid-pod" not found
| pod "valid-pod" deleted
| get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
| pod/redis-master created
| pod/valid-pod created
| Successful
... skipping 74 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
+++ echo get.sh:29
++ echo '\''get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: '\''
++ echo -n '\''(B'\''
++ return 0
+++ kubectl get pods abc -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''Error from server (NotFound): pods "abc" not found'\''
++ kube::test::if_has_string '\''Error from server (NotFound): pods "abc" not found'\'' '\''pods "abc" not found'\''
++ local '\''message=Error from server (NotFound): pods "abc" not found'\''
++ local '\''match=pods "abc" not found'\''
++ grep -q '\''pods "abc" not found'\''
++ echo Successful
++ echo '\''message:Error from server (NotFound): pods "abc" not found'\''
++ echo '\''has:pods "abc" not found'\''
++ return 0
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ local tries=1
++ local object=pods
... skipping 14 lines ...
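The get_object_assert / object_assert pair above renders a go-template over the live object list and compares the result with an expected string (here the empty string, since the namespace holds no pods). A rough approximation, inferred from the locals in this trace rather than the actual library source:

    # Approximation: retry up to ${tries} times until the rendered template
    # matches. The real helper also prefixes the calling script and line
    # (for example "get.sh:29:").
    kube::test::object_assert() {
      local tries=$1 object=$2 request=$3 expected=$4
      local i res
      for ((i = 0; i < tries; i++)); do
        res=$(kubectl get "${object}" -o go-template="${request}" \
            -s http://127.0.0.1:8080 --match-server-version)
        if [[ "${res}" == "${expected}" ]]; then
          echo "Successful get ${object} ${request}: ${res}"
          return 0
        fi
        sleep 1
      done
      echo "FAIL!"
      return 1
    }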
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
+++ echo get.sh:37
++ echo '\''get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: '\''
++ echo -n '\''(B'\''
++ return 0
+++ kubectl get pods abc -s http://127.0.0.1:8080 --match-server-version -o name
++ output_message='\''Error from server (NotFound): pods "abc" not found'\''
++ kube::test::if_has_string '\''Error from server (NotFound): pods "abc" not found'\'' '\''pods "abc" not found'\''
++ local '\''message=Error from server (NotFound): pods "abc" not found'\''
++ local '\''match=pods "abc" not found'\''
++ grep -q '\''pods "abc" not found'\''
++ echo Successful
++ echo '\''message:Error from server (NotFound): pods "abc" not found'\''
++ echo '\''has:pods "abc" not found'\''
++ return 0
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ local tries=1
++ local object=pods
... skipping 149 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
+++ echo get.sh:73
++ echo '\''get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: '\''
++ echo -n '\''(B'\''
++ return 0
+++ kubectl get foobar -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''error: the server doesn'\''\'\'''\''t have a resource type "foobar"'\''
++ kube::test::if_has_not_string '\''error: the server doesn'\''\'\'''\''t have a resource type "foobar"'\'' '\''No resources found'\''
++ local '\''message=error: the server doesn'\''\'\'''\''t have a resource type "foobar"'\''
++ local '\''match=No resources found'\''
++ grep -q '\''No resources found'\''
++ echo Successful
++ echo '\''message:error: the server doesn'\''\'\'''\''t have a resource type "foobar"'\''
++ echo '\''has not:No resources found'\''
++ return 0
+++ kubectl get pods -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''No resources found.'\''
++ kube::test::if_has_string '\''No resources found.'\'' '\''No resources found'\''
++ local '\''message=No resources found.'\''
... skipping 44 lines ...
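The assertions around here pin down where the "No resources found." notice may appear: the human-readable view of an empty namespace prints it, while structured output modes must not, so -o json returns a v1 List whose items array is empty (the JSON block in the summary above) and -o name prints nothing at all. Against the same server:

    kubectl get pods -s http://127.0.0.1:8080 --match-server-version            # "No resources found."
    kubectl get pods -s http://127.0.0.1:8080 --match-server-version -o json    # List with "items": []
    kubectl get pods -s http://127.0.0.1:8080 --match-server-version -o name    # no output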
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
+++ echo get.sh:93
++ echo '\''get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: '\''
++ echo -n '\''(B'\''
++ return 0
+++ kubectl get pods abc -s http://127.0.0.1:8080 --match-server-version -o json
++ output_message='\''Error from server (NotFound): pods "abc" not found'\''
++ kube::test::if_has_string '\''Error from server (NotFound): pods "abc" not found'\'' '\''pods "abc" not found'\''
++ local '\''message=Error from server (NotFound): pods "abc" not found'\''
++ local '\''match=pods "abc" not found'\''
++ grep -q '\''pods "abc" not found'\''
++ echo Successful
++ echo '\''message:Error from server (NotFound): pods "abc" not found'\''
++ echo '\''has:pods "abc" not found'\''
++ return 0
++ kube::test::if_has_string '\''Error from server (NotFound): pods "abc" not found'\'' List
++ local '\''message=Error from server (NotFound): pods "abc" not found'\''
++ local match=List
++ grep -q List
++ echo '\''FAIL!'\''
++ echo '\''message:Error from server (NotFound): pods "abc" not found'\''
++ echo '\''has not:List'\''
++ caller
++ return 1
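This is the one assertion in the run that fails (the FAIL! echoed in the summary above): the same NotFound output is checked twice, once for the error text, which passes, and once for the word List, which cannot match because an error for a single named pod is not wrapped in a List object. caller then reports get.sh line 99 and the helper returns 1. Reconstructed with the if_has_string sketch from earlier, not the actual test source:

    output_message=$(kubectl get pods abc -o json \
        -s http://127.0.0.1:8080 --match-server-version 2>&1 || true)
    kube::test::if_has_string "${output_message}" 'pods "abc" not found'   # passes
    kube::test::if_has_string "${output_message}" 'List'                   # FAIL!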
+++ kubectl --v=6 --namespace default get all --chunk-size=0 -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''I0315 19:06:10.873221   59514 loader.go:359] Config loaded from file /tmp/tmp.wH96EXXO7X/.kube/config
I0315 19:06:10.874852   59514 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 2183 lines ...
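The captured message elided here is kubectl's own HTTP trace: at --v=6, client-go's round_trippers logs one line per request with method, URL, status, and latency, and --chunk-size=0 disables paginated LIST calls so "get all" fetches each resource type in a single request. The invocation is useful on its own for debugging API traffic:

    kubectl --v=6 --namespace default get all --chunk-size=0 \
        -s http://127.0.0.1:8080 --match-server-version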
++ grep -q valid-pod:
++ echo Successful
++ echo message:valid-pod:
++ echo has:valid-pod:
++ return 0
+++ kubectl get pod valid-pod --allow-missing-template-keys=false -o '\''jsonpath={.missing}'\'' -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-15T19:06:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-15T19:06:15Z"}}, "name":"valid-pod", "namespace":"namespace-1552676774-20994", "resourceVersion":"712", "selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod", "uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}'\''
++ kube::test::if_has_string '\''error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-15T19:06:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-15T19:06:15Z"}}, "name":"valid-pod", "namespace":"namespace-1552676774-20994", "resourceVersion":"712", "selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod", "uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}'\'' '\''missing is not found'\''
++ local '\''message=error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-15T19:06:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-15T19:06:15Z"}}, "name":"valid-pod", "namespace":"namespace-1552676774-20994", "resourceVersion":"712", "selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod", "uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}'\''
++ local '\''match=missing is not found'\''
++ grep -q '\''missing is not found'\''
++ echo Successful
++ echo '\''message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-15T19:06:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-15T19:06:15Z"}}, "name":"valid-pod", "namespace":"namespace-1552676774-20994", "resourceVersion":"712", "selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod", "uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}'\''
++ echo '\''has:missing is not found'\''
++ return 0
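The jsonpath branch mirrors the go-template one: with --allow-missing-template-keys=false, {.missing} is a hard error ("missing is not found") and kubectl dumps the full decoded object, while an existing key resolves normally. A minimal pair under the same assumptions as above:

    # Hard error: the key is absent and strict mode is on.
    kubectl get pod valid-pod --allow-missing-template-keys=false \
        -o 'jsonpath={.missing}' -s http://127.0.0.1:8080 --match-server-version
    # Normal resolution of an existing key prints "valid-pod".
    kubectl get pod valid-pod \
        -o 'jsonpath={.metadata.name}' -s http://127.0.0.1:8080 --match-server-version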
+++ kubectl get pod valid-pod --allow-missing-template-keys=false -o '\''go-template={{.missing}}'\'' -s http://127.0.0.1:8080 --match-server-version
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
++ output_message='\''Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]'\''
++ kube::test::if_has_string '\''Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]'\'' '\''map has no entry for key "missing"'\''
++ local '\''message=Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]'\''
++ local '\''match=map has no entry for key "missing"'\''
++ grep -q '\''map has no entry for key "missing"'\''
++ echo Successful
++ echo '\''message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]'\''
... skipping 551 lines ...
status:
  phase: Pending
  qosClass: Guaranteed'\''
++ echo '\''has:name: valid-pod'\''
++ return 0
+++ kubectl get pods/invalid-pod -w --request-timeout=1 -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''Error from server (NotFound): pods "invalid-pod" not found'\''
++ kube::test::if_has_string '\''Error from server (NotFound): pods "invalid-pod" not found'\'' '\''"invalid-pod" not found'\''
++ local '\''message=Error from server (NotFound): pods "invalid-pod" not found'\''
++ local '\''match="invalid-pod" not found'\''
++ grep -q '\''"invalid-pod" not found'\''
++ echo Successful
++ echo '\''message:Error from server (NotFound): pods "invalid-pod" not found'\''
++ echo '\''has:"invalid-pod" not found'\''
++ return 0
++ kubectl delete pods valid-pod -s http://127.0.0.1:8080 --match-server-version
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ local tries=1
... skipping 260 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
+++ echo get.sh:29
++ echo '\''get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: '\''
++ echo -n '\''(B'\''
++ return 0
+++ kubectl get pods abc -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''Error from server (NotFound): pods "abc" not found'\''
++ kube::test::if_has_string '\''Error from server (NotFound): pods "abc" not found'\'' '\''pods "abc" not found'\''
++ local '\''message=Error from server (NotFound): pods "abc" not found'\''
++ local '\''match=pods "abc" not found'\''
++ grep -q '\''pods "abc" not found'\''
++ echo Successful
++ echo '\''message:Error from server (NotFound): pods "abc" not found'\''
++ echo '\''has:pods "abc" not found'\''
++ return 0
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ local tries=1
++ local object=pods
... skipping 14 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
+++ echo get.sh:37
++ echo '\''get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: '\''
++ echo -n '\''(B'\''
++ return 0
+++ kubectl get pods abc -s http://127.0.0.1:8080 --match-server-version -o name
++ output_message='\''Error from server (NotFound): pods "abc" not found'\''
++ kube::test::if_has_string '\''Error from server (NotFound): pods "abc" not found'\'' '\''pods "abc" not found'\''
++ local '\''message=Error from server (NotFound): pods "abc" not found'\''
++ local '\''match=pods "abc" not found'\''
++ grep -q '\''pods "abc" not found'\''
++ echo Successful
++ echo '\''message:Error from server (NotFound): pods "abc" not found'\''
++ echo '\''has:pods "abc" not found'\''
++ return 0
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ local tries=1
++ local object=pods
... skipping 149 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
+++ echo get.sh:73
++ echo '\''get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: '\''
++ echo -n '\''(B'\''
++ return 0
+++ kubectl get foobar -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''error: the server doesn'\''\'\'''\''t have a resource type "foobar"'\''
++ kube::test::if_has_not_string '\''error: the server doesn'\''\'\'''\''t have a resource type "foobar"'\'' '\''No resources found'\''
++ local '\''message=error: the server doesn'\''\'\'''\''t have a resource type "foobar"'\''
++ local '\''match=No resources found'\''
++ grep -q '\''No resources found'\''
++ echo Successful
++ echo '\''message:error: the server doesn'\''\'\'''\''t have a resource type "foobar"'\''
++ echo '\''has not:No resources found'\''
++ return 0
+++ kubectl get pods -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''No resources found.'\''
++ kube::test::if_has_string '\''No resources found.'\'' '\''No resources found'\''
++ local '\''message=No resources found.'\''
... skipping 44 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
+++ echo get.sh:93
++ echo '\''get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: '\''
++ echo -n '\''(B'\''
++ return 0
+++ kubectl get pods abc -s http://127.0.0.1:8080 --match-server-version -o json
++ output_message='\''Error from server (NotFound): pods "abc" not found'\''
++ kube::test::if_has_string '\''Error from server (NotFound): pods "abc" not found'\'' '\''pods "abc" not found'\''
++ local '\''message=Error from server (NotFound): pods "abc" not found'\''
++ local '\''match=pods "abc" not found'\''
++ grep -q '\''pods "abc" not found'\''
++ echo Successful
++ echo '\''message:Error from server (NotFound): pods "abc" not found'\''
++ echo '\''has:pods "abc" not found'\''
++ return 0
++ kube::test::if_has_string '\''Error from server (NotFound): pods "abc" not found'\'' List
++ local '\''message=Error from server (NotFound): pods "abc" not found'\''
++ local match=List
++ grep -q List
++ echo '\''FAIL!'\''
++ echo '\''message:Error from server (NotFound): pods "abc" not found'\''
++ echo '\''has not:List'\''
++ caller
++ return 1
+++ kubectl --v=6 --namespace default get all --chunk-size=0 -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''I0315 19:06:10.873221   59514 loader.go:359] Config loaded from file /tmp/tmp.wH96EXXO7X/.kube/config
I0315 19:06:10.874852   59514 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 2183 lines ...
++ grep -q valid-pod:
++ echo Successful
++ echo message:valid-pod:
++ echo has:valid-pod:
++ return 0
+++ kubectl get pod valid-pod --allow-missing-template-keys=false -o '\''jsonpath={.missing}'\'' -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-15T19:06:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-15T19:06:15Z"}}, "name":"valid-pod", "namespace":"namespace-1552676774-20994", "resourceVersion":"712", "selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod", "uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}'\''
++ kube::test::if_has_string '\''error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-15T19:06:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-15T19:06:15Z"}}, "name":"valid-pod", "namespace":"namespace-1552676774-20994", "resourceVersion":"712", "selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod", "uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}'\'' '\''missing is not found'\''
++ local '\''message=error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-15T19:06:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-15T19:06:15Z"}}, "name":"valid-pod", "namespace":"namespace-1552676774-20994", "resourceVersion":"712", "selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod", "uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}'\''
++ local '\''match=missing is not found'\''
++ grep -q '\''missing is not found'\''
++ echo Successful
++ echo '\''message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-03-15T19:06:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-03-15T19:06:15Z"}}, "name":"valid-pod", "namespace":"namespace-1552676774-20994", "resourceVersion":"712", "selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod", "uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}'\''
++ echo '\''has:missing is not found'\''
++ return 0
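[note] The strict failure above is the intended effect of --allow-missing-template-keys=false: a jsonpath expression naming a key absent from the object is an error rather than silence. A minimal sketch, assuming the same local apiserver and the valid-pod this test created:
# lenient default: a missing key renders as empty output
kubectl get pod valid-pod -o 'jsonpath={.missing}' -s http://127.0.0.1:8080
# strict: the same expression fails with "missing is not found"
kubectl get pod valid-pod --allow-missing-template-keys=false \
  -o 'jsonpath={.missing}' -s http://127.0.0.1:8080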
+++ kubectl get pod valid-pod --allow-missing-template-keys=false -o '\''go-template={{.missing}}'\'' -s http://127.0.0.1:8080 --match-server-version
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
++ output_message='\''Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]'\''
++ kube::test::if_has_string '\''Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]'\'' '\''map has no entry for key "missing"'\''
++ local '\''message=Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]'\''
++ local '\''match=map has no entry for key "missing"'\''
++ grep -q '\''map has no entry for key "missing"'\''
++ echo Successful
++ echo '\''message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T19:06:15Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T19:06:15Z"}],"name":"valid-pod","namespace":"namespace-1552676774-20994","resourceVersion":"712","selfLink":"/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod","uid":"67ef26f6-4755-11e9-93be-5aefd4b81fff"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-03-15T19:06:15Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-03-15T19:06:15Z]] name:valid-pod namespace:namespace-1552676774-20994 resourceVersion:712 selfLink:/api/v1/namespaces/namespace-1552676774-20994/pods/valid-pod uid:67ef26f6-4755-11e9-93be-5aefd4b81fff] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]'\''
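[note] The go-template variant fails the same way: with --allow-missing-template-keys=false the template is compiled with missingkey=error, so {{.missing}} aborts instead of printing <no value>. A sketch against the same pod:
# lenient default: Go templates print <no value> for an absent key
kubectl get pod valid-pod -o 'go-template={{.missing}}' -s http://127.0.0.1:8080
# strict: fails with 'map has no entry for key "missing"', as captured above
kubectl get pod valid-pod --allow-missing-template-keys=false \
  -o 'go-template={{.missing}}' -s http://127.0.0.1:8080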
... skipping 551 lines ...
status:
  phase: Pending
  qosClass: Guaranteed'\''
++ echo '\''has:name: valid-pod'\''
++ return 0
+++ kubectl get pods/invalid-pod -w --request-timeout=1 -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''Error from server (NotFound): pods "invalid-pod" not found'\''
++ kube::test::if_has_string '\''Error from server (NotFound): pods "invalid-pod" not found'\'' '\''"invalid-pod" not found'\''
++ local '\''message=Error from server (NotFound): pods "invalid-pod" not found'\''
++ local '\''match="invalid-pod" not found'\''
++ grep -q '\''"invalid-pod" not found'\''
++ echo Successful
++ echo '\''message:Error from server (NotFound): pods "invalid-pod" not found'\''
++ echo '\''has:"invalid-pod" not found'\''
++ return 0
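[note] Watching a resource that does not exist surfaces the server's NotFound immediately; --request-timeout=1 only bounds how long the watch may otherwise hang. Roughly:
# bounded watch: returns NotFound at once, or gives up after ~1s
kubectl get pods/invalid-pod -w --request-timeout=1 -s http://127.0.0.1:8080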
++ kubectl delete pods valid-pod -s http://127.0.0.1:8080 --match-server-version
++ kube::test::get_object_assert pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ kube::test::object_assert 1 pods '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
++ local tries=1
... skipping 3663 lines ...
+ tee -a /var/tmp/ju23812.txt
+ eval run_create_secret_tests
++ run_create_secret_tests
++ set -o nounset
++ set -o errexit
+++ kubectl get secrets mysecret -s http://127.0.0.1:8080 --match-server-version
++ output_message='Error from server (NotFound): secrets "mysecret" not found'
++ kube::test::if_has_string 'Error from server (NotFound): secrets "mysecret" not found' 'secrets "mysecret" not found'
++ local 'message=Error from server (NotFound): secrets "mysecret" not found'
++ local 'match=secrets "mysecret" not found'
++ grep -q 'secrets "mysecret" not found'
++ echo Successful
++ echo 'message:Error from server (NotFound): secrets "mysecret" not found'
++ echo 'has:secrets "mysecret" not found'
Successful
++ return 0
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
+++ kubectl create -s http://127.0.0.1:8080 --match-server-version secret generic mysecret --dry-run --from-literal=foo=bar -o 'jsonpath={.metadata.namespace}' --namespace=user-specified
++ output_message=user-specified
+++ kubectl get secrets mysecret -s http://127.0.0.1:8080 --match-server-version
I0315 19:06:33.112067   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:06:33.112306   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
++ failure_message='Error from server (NotFound): secrets "mysecret" not found'
++ kube::test::if_has_string 'Error from server (NotFound): secrets "mysecret" not found' 'secrets "mysecret" not found'
++ local 'message=Error from server (NotFound): secrets "mysecret" not found'
++ local 'match=secrets "mysecret" not found'
++ grep -q 'secrets "mysecret" not found'
++ echo Successful
++ echo 'message:Error from server (NotFound): secrets "mysecret" not found'
++ echo 'has:secrets "mysecret" not found'
Successful
message:Error from server (NotFound): secrets "mysecret" not found
++ return 0
has:secrets "mysecret" not found
++ kube::test::if_has_string user-specified user-specified
++ local message=user-specified
++ local match=user-specified
++ grep -q user-specified
... skipping 35 lines ...
++ sed -e 's/^\([^+]\)/| \1/g'
+ out='
+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
| Successful
| message:Error from server (NotFound): secrets "mysecret" not found
| has:secrets "mysecret" not found
| Successful
| message:Error from server (NotFound): secrets "mysecret" not found
| has:secrets "mysecret" not found
| Successful
| message:user-specified
| has:user-specified
| Successful
| {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"72bf77aa-4755-11e9-93be-5aefd4b81fff","resourceVersion":"823","creationTimestamp":"2019-03-15T19:06:33Z"}}
... skipping 7 lines ...
+ tee -a /var/tmp/ju23812.txt
+ eval run_create_secret_tests
++ run_create_secret_tests
++ set -o nounset
++ set -o errexit
+++ kubectl get secrets mysecret -s http://127.0.0.1:8080 --match-server-version
++ output_message='\''Error from server (NotFound): secrets "mysecret" not found'\''
++ kube::test::if_has_string '\''Error from server (NotFound): secrets "mysecret" not found'\'' '\''secrets "mysecret" not found'\''
++ local '\''message=Error from server (NotFound): secrets "mysecret" not found'\''
++ local '\''match=secrets "mysecret" not found'\''
++ grep -q '\''secrets "mysecret" not found'\''
++ echo Successful
++ echo '\''message:Error from server (NotFound): secrets "mysecret" not found'\''
++ echo '\''has:secrets "mysecret" not found'\''
++ return 0
+++ kubectl create -s http://127.0.0.1:8080 --match-server-version secret generic mysecret --dry-run --from-literal=foo=bar -o '\''jsonpath={.metadata.namespace}'\'' --namespace=user-specified
++ output_message=user-specified
+++ kubectl get secrets mysecret -s http://127.0.0.1:8080 --match-server-version
++ failure_message='\''Error from server (NotFound): secrets "mysecret" not found'\''
++ kube::test::if_has_string '\''Error from server (NotFound): secrets "mysecret" not found'\'' '\''secrets "mysecret" not found'\''
++ local '\''message=Error from server (NotFound): secrets "mysecret" not found'\''
++ local '\''match=secrets "mysecret" not found'\''
++ grep -q '\''secrets "mysecret" not found'\''
++ echo Successful
++ echo '\''message:Error from server (NotFound): secrets "mysecret" not found'\''
++ echo '\''has:secrets "mysecret" not found'\''
++ return 0
++ kube::test::if_has_string user-specified user-specified
++ local message=user-specified
++ local match=user-specified
++ grep -q user-specified
... skipping 33 lines ...
... skipping 1319 lines ...
++ echo has:valid-pod
++ return 0
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
has:valid-pod
+++ kubectl get pod valid-pod --request-timeout=1p
++ output_message='error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)'
++ kube::test::if_has_string 'error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)' 'Invalid timeout value'
++ local 'message=error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)'
++ local 'match=Invalid timeout value'
++ grep -q 'Invalid timeout value'
++ echo Successful
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
++ echo 'message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)'
++ echo 'has:Invalid timeout value'
++ return 0
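[note] --request-timeout accepts a bare integer (seconds) or an integer with a time unit; '1p' uses an unrecognized unit, hence the error. For example:
kubectl get pod valid-pod --request-timeout=1    # one second
kubectl get pod valid-pod --request-timeout=2m   # two minutes
kubectl get pod valid-pod --request-timeout=1p   # rejected: invalid unit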
++ kubectl delete pods valid-pod -s http://127.0.0.1:8080 --match-server-version
pod "valid-pod" deleted
++ set +o nounset
++ set +o errexit
... skipping 138 lines ...
| has:Timeout exceeded while reading body
| Successful
| message:NAME        READY   STATUS    RESTARTS   AGE
| valid-pod   0/1     Pending   0          2s
| has:valid-pod
| Successful
| message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
| has:Invalid timeout value
| pod "valid-pod" deleted
+++ exit code: 0'
+ '[' 0 = 0 -a -n '' ']'
+ [[ 0 != 0 ]]
+ rm -f /var/tmp/ju23812.txt
... skipping 116 lines ...
++ echo Successful
++ echo '\''message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s'\''
++ echo has:valid-pod
++ return 0
+++ kubectl get pod valid-pod --request-timeout=1p
++ output_message='\''error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)'\''
++ kube::test::if_has_string '\''error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)'\'' '\''Invalid timeout value'\''
++ local '\''message=error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)'\''
++ local '\''match=Invalid timeout value'\''
++ grep -q '\''Invalid timeout value'\''
++ echo Successful
++ echo '\''message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)'\''
++ echo '\''has:Invalid timeout value'\''
++ return 0
++ kubectl delete pods valid-pod -s http://127.0.0.1:8080 --match-server-version
++ set +o nounset
++ set +o errexit
+ echo 0
... skipping 130 lines ...
... skipping 255 lines ...
++ echo '+++ [0315 19:06:42] Testing kubectl non-native resources'
++ shift
+++ [0315 19:06:42] Testing kubectl non-native resources
++ kube::util::non_native_resources
++ local times
++ local wait
++ local failed
++ times=30
++ wait=10
++ local i
+++ seq 1 30
++ for i in $(seq 1 $times)
++ failed=
++ kubectl -s http://127.0.0.1:8080 --match-server-version get --raw /apis/company.com/v1
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"company.com/v1","resources":[{"name":"foos","singularName":"foo","namespaced":true,"kind":"Foo","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"storageVersionHash":"xIRtouR4Ix8="},{"name":"bars","singularName":"bar","namespaced":true,"kind":"Bar","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"storageVersionHash":"5GMNuFRm/lM="},{"name":"validfoos","singularName":"validfoo","namespaced":true,"kind":"ValidFoo","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"storageVersionHash":"mHoViSBo05k="}]}
++ kubectl -s http://127.0.0.1:8080 --match-server-version get --raw /apis/company.com/v1/foos
I0315 19:06:42.935589   47212 client.go:352] parsed scheme: ""
I0315 19:06:42.935663   47212 client.go:352] scheme "" not registered, fallback to default scheme
I0315 19:06:42.935698   47212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
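[note] kube::util::non_native_resources, traced above, polls raw discovery until the CRD-backed group/version is actually served, retrying up to 30 times with a 10s wait between attempts; a condensed sketch of that loop, assuming the same apiserver address:
times=30; wait=10
for i in $(seq 1 "$times"); do
  failed=
  # discovery succeeds only once the group/version is being served
  kubectl -s http://127.0.0.1:8080 get --raw /apis/company.com/v1 || failed=true
  [ -z "$failed" ] && break
  sleep "$wait"
done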
... skipping 126 lines ...
has:kind.mygroup.example.com/myobj
+++ kubectl -s http://127.0.0.1:8080 --match-server-version get kind.mygroup.example.com/myobj -o name
I0315 19:06:48.120125   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:06:48.120351   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0315 19:06:49.120586   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:06:49.120830   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
E0315 19:06:49.730676   50145 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos"]
I0315 19:06:50.121118   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:06:50.121395   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
++ output_message=kind.mygroup.example.com/myobj
++ kube::test::if_has_string kind.mygroup.example.com/myobj kind.mygroup.example.com/myobj
++ local message=kind.mygroup.example.com/myobj
++ local match=kind.mygroup.example.com/myobj
... skipping 445 lines ...
++ echo 'crd.sh:241: Successful get foos/test {{.patched}}: <no value>'
++ echo -n '(B'
crd.sh:241: Successful get foos/test {{.patched}}: <no value>
++ return 0
++ CRD_RESOURCE_FILE=/tmp/tmp.wH96EXXO7X/crd-foos-test.json
++ kubectl -s http://127.0.0.1:8080 --match-server-version get foos/test -o json
++ CRD_PATCH_ERROR_FILE=/tmp/tmp.wH96EXXO7X/crd-foos-test-error
++ kubectl -s http://127.0.0.1:8080 --match-server-version patch --local -f /tmp/tmp.wH96EXXO7X/crd-foos-test.json -p '{"patched":"value3"}'
++ grep -q 'try --type merge' /tmp/tmp.wH96EXXO7X/crd-foos-test-error
+++ cat /tmp/tmp.wH96EXXO7X/crd-foos-test-error
++ kube::log::status '"kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge'
++ local V=0
++ [[ 1 < 0 ]]
+++ date '+[%m%d %H:%M:%S]'
++ timestamp='[0315 19:06:57]'
++ echo '+++ [0315 19:06:57] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge'
++ shift
+++ [0315 19:06:57] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
++ kubectl -s http://127.0.0.1:8080 --match-server-version patch --local -f /tmp/tmp.wH96EXXO7X/crd-foos-test.json -p '{"patched":"value3"}' --type=merge -o json
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
... skipping 137 lines ...
+++ echo crd.sh:258
++ echo 'crd.sh:258: Successful get foos/test {{.patched}}: value3'
crd.sh:258: Successful get foos/test {{.patched}}: value3
++ echo -n '(B'
++ return 0
++ rm /tmp/tmp.wH96EXXO7X/crd-foos-test.json
++ rm /tmp/tmp.wH96EXXO7X/crd-foos-test-error
++ kube::log::status 'Testing CustomResource labeling'
++ local V=0
++ [[ 1 < 0 ]]
+++ date '+[%m%d %H:%M:%S]'
++ timestamp='[0315 19:06:58]'
++ echo '+++ [0315 19:06:58] Testing CustomResource labeling'
... skipping 1297 lines ...
++ echo 'crd.sh:459: Successful get bars {{len .items}}: 0'
++ echo -n '(B'
++ return 0
crd.sh:459: Successful get bars {{len .items}}: 0
++ local tries=0
++ kubectl -s http://127.0.0.1:8080 --match-server-version get namespace non-native-resources
Error from server (NotFound): namespaces "non-native-resources" not found
++ set +o nounset
++ set +o errexit
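[note] Namespace deletion is asynchronous, so teardown treats a NotFound from the final get as success; a hypothetical way to wait it out:
# loop until the namespace is fully gone (NotFound ends the loop)
while kubectl get namespace non-native-resources -s http://127.0.0.1:8080 >/dev/null 2>&1; do
  sleep 1
done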
++ kubectl delete customresourcedefinitions/foos.company.com -s https://127.0.0.1:6443 --token=admin-token --insecure-skip-tls-verify=true --match-server-version
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
++ kubectl delete customresourcedefinitions/bars.company.com -s https://127.0.0.1:6443 --token=admin-token --insecure-skip-tls-verify=true --match-server-version
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
++ kubectl delete customresourcedefinitions/resources.mygroup.example.com -s https://127.0.0.1:6443 --token=admin-token --insecure-skip-tls-verify=true --match-server-version
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
E0315 19:07:20.032467   50145 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos"]
++ kubectl delete customresourcedefinitions/validfoos.company.com -s https://127.0.0.1:6443 --token=admin-token --insecure-skip-tls-verify=true --match-server-version
I0315 19:07:20.139828   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:07:20.139975   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
++ set +o nounset
++ set +o errexit
... skipping 236 lines ...
| foo.company.com/test patched
| crd.sh:237: Successful get foos/test {{.patched}}: value1
| foo.company.com/test patched
| crd.sh:239: Successful get foos/test {{.patched}}: value2
| foo.company.com/test patched
| crd.sh:241: Successful get foos/test {{.patched}}: <no value>
| +++ [0315 19:06:57] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
| {
|     "apiVersion": "company.com/v1",
|     "kind": "Foo",
|     "metadata": {
|         "annotations": {
|             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 455 lines ...
++ timestamp='\''[0315 19:06:42]'\''
++ echo '\''+++ [0315 19:06:42] Testing kubectl non-native resources'\''
++ shift
++ kube::util::non_native_resources
++ local times
++ local wait
++ local failed
++ times=30
++ wait=10
++ local i
+++ seq 1 30
++ for i in $(seq 1 $times)
++ failed=
++ kubectl -s http://127.0.0.1:8080 --match-server-version get --raw /apis/company.com/v1
++ kubectl -s http://127.0.0.1:8080 --match-server-version get --raw /apis/company.com/v1/foos
++ kubectl -s http://127.0.0.1:8080 --match-server-version get --raw /apis/company.com/v1/bars
++ '\''['\'' -z '\'''\'' '\'']'\''
++ return 0
++ kube::test::get_object_assert foos '\''{{range.items}}{{.metadata.name}}:{{end}}'\'' '\'''\''
... skipping 322 lines ...
+++ echo crd.sh:241
++ echo '\''crd.sh:241: Successful get foos/test {{.patched}}: <no value>'\''
++ echo -n '\''(B'\''
++ return 0
++ CRD_RESOURCE_FILE=/tmp/tmp.wH96EXXO7X/crd-foos-test.json
++ kubectl -s http://127.0.0.1:8080 --match-server-version get foos/test -o json
++ CRD_PATCH_ERROR_FILE=/tmp/tmp.wH96EXXO7X/crd-foos-test-error
++ kubectl -s http://127.0.0.1:8080 --match-server-version patch --local -f /tmp/tmp.wH96EXXO7X/crd-foos-test.json -p '\''{"patched":"value3"}'\''
++ grep -q '\''try --type merge'\'' /tmp/tmp.wH96EXXO7X/crd-foos-test-error
+++ cat /tmp/tmp.wH96EXXO7X/crd-foos-test-error
++ kube::log::status '\''"kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge'\''
++ local V=0
++ [[ 1 < 0 ]]
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:06:57]'\''
++ echo '\''+++ [0315 19:06:57] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge'\''
++ shift
++ kubectl -s http://127.0.0.1:8080 --match-server-version patch --local -f /tmp/tmp.wH96EXXO7X/crd-foos-test.json -p '\''{"patched":"value3"}'\'' --type=merge -o json
++ kubectl -s http://127.0.0.1:8080 --match-server-version patch --record -f /tmp/tmp.wH96EXXO7X/crd-foos-test.json -p '\''{"patched":"value3"}'\'' --type=merge -o json
++ kube::test::get_object_assert foos/test '\''{{.patched}}'\'' value3
++ kube::test::object_assert 1 foos/test '\''{{.patched}}'\'' value3
++ local tries=1
... skipping 15 lines ...
++++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh
+++ echo crd.sh:258
++ echo '\''crd.sh:258: Successful get foos/test {{.patched}}: value3'\''
++ echo -n '\''(B'\''
++ return 0
++ rm /tmp/tmp.wH96EXXO7X/crd-foos-test.json
++ rm /tmp/tmp.wH96EXXO7X/crd-foos-test-error
++ kube::log::status '\''Testing CustomResource labeling'\''
++ local V=0
++ [[ 1 < 0 ]]
+++ date '\''+[%m%d %H:%M:%S]'\''
++ timestamp='\''[0315 19:06:58]'\''
++ echo '\''+++ [0315 19:06:58] Testing CustomResource labeling'\''
... skipping 1065 lines ...
+++ echo crd.sh:459
++ echo '\''crd.sh:459: Successful get bars {{len .items}}: 0'\''
++ echo -n '\''(B'\''
++ return 0
++ local tries=0
++ kubectl -s http://127.0.0.1:8080 --match-server-version get namespace non-native-resources
Error from server (NotFound): namespaces "non-native-resources" not found
++ set +o nounset
++ set +o errexit
++ kubectl delete customresourcedefinitions/foos.company.com -s https://127.0.0.1:6443 --token=admin-token --insecure-skip-tls-verify=true --match-server-version
++ kubectl delete customresourcedefinitions/bars.company.com -s https://127.0.0.1:6443 --token=admin-token --insecure-skip-tls-verify=true --match-server-version
++ kubectl delete customresourcedefinitions/resources.mygroup.example.com -s https://127.0.0.1:6443 --token=admin-token --insecure-skip-tls-verify=true --match-server-version
++ kubectl delete customresourcedefinitions/validfoos.company.com -s https://127.0.0.1:6443 --token=admin-token --insecure-skip-tls-verify=true --match-server-version
... skipping 160 lines ...
... skipping 139 lines ...
++ kubectl delete deployments test1
I0315 19:07:21.088536   50145 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552676840-9245", Name:"test1-848d5d4b47", UID:"8f13eee7-4755-11e9-93be-5aefd4b81fff", APIVersion:"apps/v1", ResourceVersion:"987", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-848d5d4b47-psfbr
I0315 19:07:21.140311   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:07:21.140523   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
deployment.extensions "test1" deleted
+++ kubectl run test2 --image=InvalidImageName
++ output_message='error: Invalid image name "InvalidImageName": invalid reference format'
++ kube::test::if_has_string 'error: Invalid image name "InvalidImageName": invalid reference format' 'error: Invalid image name "InvalidImageName": invalid reference format'
++ local 'message=error: Invalid image name "InvalidImageName": invalid reference format'
++ local 'match=error: Invalid image name "InvalidImageName": invalid reference format'
++ grep -q 'error: Invalid image name "InvalidImageName": invalid reference format'
++ echo Successful
++ echo 'message:error: Invalid image name "InvalidImageName": invalid reference format'
++ echo 'has:error: Invalid image name "InvalidImageName": invalid reference format'
++ return 0
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
++ set +o nounset
++ set +o errexit
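[note] kubectl run validates the image reference client-side; uppercase letters in the repository path violate the reference grammar, so the error fires before anything reaches the server. Compare:
kubectl run test2 --image=InvalidImageName   # error: invalid reference format
kubectl run test2 --image=busybox            # lowercase reference is accepted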
+ echo 0
+ tr -d '\n'
++ cat /tmp/evErr.23812.log
+ evErr=0
... skipping 17 lines ...
+++ [0315 19:07:20] Testing cmd with image
| Successful
| message:deployment.apps/test1 created
| has:deployment.apps/test1 created
| deployment.extensions "test1" deleted
| Successful
| message:error: Invalid image name "InvalidImageName": invalid reference format
| has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0'
+ '[' 0 = 0 -a -n '' ']'
+ [[ 0 != 0 ]]
+ rm -f /var/tmp/ju23812.txt
++ cat /var/tmp/ju23812-err.txt
+ errMsg='+ eVal run_cmd_with_img_tests
... skipping 32 lines ...
++ echo Successful
++ echo '\''message:deployment.apps/test1 created'\''
++ echo '\''has:deployment.apps/test1 created'\''
++ return 0
++ kubectl delete deployments test1
+++ kubectl run test2 --image=InvalidImageName
++ output_message='\''error: Invalid image name "InvalidImageName": invalid reference format'\''
++ kube::test::if_has_string '\''error: Invalid image name "InvalidImageName": invalid reference format'\'' '\''error: Invalid image name "InvalidImageName": invalid reference format'\''
++ local '\''message=error: Invalid image name "InvalidImageName": invalid reference format'\''
++ local '\''match=error: Invalid image name "InvalidImageName": invalid reference format'\''
++ grep -q '\''error: Invalid image name "InvalidImageName": invalid reference format'\''
++ echo Successful
++ echo '\''message:error: Invalid image name "InvalidImageName": invalid reference format'\''
++ echo '\''has:error: Invalid image name "InvalidImageName": invalid reference format'\''
++ return 0
++ set +o nounset
++ set +o errexit
+ echo 0
+ tr -d '\''\n'\'''
+ rm -f /var/tmp/ju23812-err.txt
... skipping 46 lines ...
+ tr -d '\''\n'\'']]></system-err>
    </testcase>
... skipping 70 lines ...
+ return 0
++ kubectl create -f hack/testdata/recursive/pod --recursive -s http://127.0.0.1:8080 --match-server-version
I0315 19:07:22.140801   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:07:22.140958   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
+ output_message='pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false'
+ kube::test::get_object_assert pods '{{range.items}}{{.metadata.name}}:{{end}}' busybox0:busybox1:
+ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.name}}:{{end}}' busybox0:busybox1:
+ local tries=1
+ local object=pods
+ local 'request={{range.items}}{{.metadata.name}}:{{end}}'
+ local expected=busybox0:busybox1:
... skipping 15 lines ...
+ echo 'generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:'
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
+ echo -n '(B'
+ return 0
+ kube::test::if_has_string 'pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false' 'error validating data: kind not set'
+ local 'message=pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false'
+ local 'match=error validating data: kind not set'
+ grep -q 'error validating data: kind not set'
+ echo Successful
Successful
+ echo 'message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false'
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
+ echo 'has:error validating data: kind not set'
has:error validating data: kind not set
+ return 0
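[note] With --recursive, kubectl walks the directory and handles each manifest independently: busybox0 and busybox1 are created, the file with no kind fails validation, and the exit status reflects the error without undoing the successes. As traced:
# two pods created, one validation error for the broken manifest
kubectl create -f hack/testdata/recursive/pod --recursive -s http://127.0.0.1:8080
# to push past known-bad files instead, skip client-side validation:
kubectl create -f hack/testdata/recursive/pod --recursive --validate=false \
  -s http://127.0.0.1:8080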
+ kube::test::get_object_assert pods '{{range.items}}{{.metadata.name}}:{{end}}' busybox0:busybox1:
+ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.name}}:{{end}}' busybox0:busybox1:
+ local tries=1
+ local object=pods
+ local 'request={{range.items}}{{.metadata.name}}:{{end}}'
... skipping 18 lines ...
+ echo -n '(B'
+ return 0
+ echo -e '#!/usr/bin/env bash\nsed -i "s/image: busybox/image: prom\/busybox/g" $1'
+ chmod +x /tmp/tmp-editor.sh
++ EDITOR=/tmp/tmp-editor.sh
++ kubectl edit -f hack/testdata/recursive/pod --recursive -s http://127.0.0.1:8080 --match-server-version
+ output_message='error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object '\''Kind'\'' is missing in '\''{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'\'''
+ kube::test::get_object_assert pods '{{range.items}}{{(index .spec.containers 0).image}}:{{end}}' busybox:busybox:
+ kube::test::object_assert 1 pods '{{range.items}}{{(index .spec.containers 0).image}}:{{end}}' busybox:busybox:
+ local tries=1
+ local object=pods
+ local 'request={{range.items}}{{(index .spec.containers 0).image}}:{{end}}'
+ local expected=busybox:busybox:
... skipping 12 lines ...
+++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/generic-resources.sh
++ echo generic-resources.sh:219
+ echo 'generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:'
generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
+ echo -n '(B'
+ return 0
+ kube::test::if_has_string 'error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object '\''Kind'\'' is missing in '\''{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'\''' 'Object '\''Kind'\'' is missing'
+ local 'message=error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object '\''Kind'\'' is missing in '\''{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'\'''
+ local 'match=Object '\''Kind'\'' is missing'
+ grep -q 'Object '\''Kind'\'' is missing'
+ echo Successful
Successful
+ echo 'message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object '\''Kind'\'' is missing in '\''{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'\'''
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
+ echo 'has:Object '\''Kind'\'' is missing'
has:Object 'Kind' is missing
+ return 0
+ rm /tmp/tmp-editor.sh
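[note] The edit test drives the interactive editor non-interactively by pointing EDITOR at a throwaway sed script, the pattern traced above:
# the editor is invoked with the temp manifest path as $1
echo -e '#!/usr/bin/env bash\nsed -i "s/image: busybox/image: prom\/busybox/g" $1' \
  > /tmp/tmp-editor.sh
chmod +x /tmp/tmp-editor.sh
EDITOR=/tmp/tmp-editor.sh kubectl edit -f hack/testdata/recursive/pod --recursive \
  -s http://127.0.0.1:8080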
+ kube::test::get_object_assert pods '{{range.items}}{{.metadata.name}}:{{end}}' busybox0:busybox1:
+ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.name}}:{{end}}' busybox0:busybox1:
... skipping 21 lines ...
+ return 0
++ kubectl replace -f hack/testdata/recursive/pod-modify --recursive -s http://127.0.0.1:8080 --match-server-version
I0315 19:07:23.141427   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 19:07:23.141627   47212 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
+ output_message='pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false'
+ kube::test::get_object_assert pods '{{range.items}}{{.metadata.labels.status}}:{{end}}' replaced:replaced:
+ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.labels.status}}:{{end}}' replaced:replaced:
+ local tries=1
+ local object=pods
+ local 'request={{range.items}}{{.metadata.labels.status}}:{{end}}'
+ local expected=replaced:replaced:
... skipping 14 lines ...
+ echo 'generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:'
generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
+ echo -n '(B'
(B+ return 0
+ kube::test::if_has_string 'pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false' 'error validating data: kind not set'
+ local 'message=pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false'
+ local 'match=error validating data: kind not set'
+ grep -q 'error validating data: kind not set'
+ echo Successful
Successful
+ echo 'message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false'
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
+ echo 'has:error validating data: kind not set'
has:error validating data: kind not set
+ return 0
+ kube::test::get_object_assert pods '{{range.items}}{{.metadata.name}}:{{end}}' busybox0:busybox1:
+ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.name}}:{{end}}' busybox0:busybox1:
+ local tries=1
+ local object=pods
+ local 'request={{range.items}}{{.metadata.name}}:{{end}}'
... skipping 752 lines ...
+ echo -n '(B'
(B+ return 0
++ kubectl annotate -f hack/testdata/recursive/pod annotatekey=annotatevalue --recursive -s http://127.0.0.1:8080 --match-server-version
I0315 19:07:24.030826   50145 namespace_controller.go:171] Namespace has been deleted non-native-resources
+ output_message='pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object '\''Kind'\'' is missing in '\''{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'\'''
+ kube::test::get_object_assert pods '{{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}' annotatevalue:annotatevalue:
+ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}' annotatevalue:annotatevalue:
+ local tries=1
+ local object=pods
+ local 'request={{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}'
+ local expected=annotatevalue:annotatevalue:
... skipping 16 lines ...
+ echo 'generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:'
generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
+ echo -n '(B'
(B+ return 0
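The annotate pass mirrors the earlier replace pass: the two valid pods are annotated and the broken file again fails to decode. A standalone equivalent of the command plus the follow-up template check, with flags as used in this run:

# Annotate the fixtures recursively, then read the annotation back.
kubectl annotate -f hack/testdata/recursive/pod annotatekey=annotatevalue \
  --recursive -s http://127.0.0.1:8080 --match-server-version
kubectl get pods -s http://127.0.0.1:8080 --match-server-version \
  -o go-template='{{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}'
# Expected: annotatevalue:annotatevalue: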
+ kube::test::if_has_string 'pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object '\''Kind'\'' is missing in '\''{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'\''' 'Object '\''Kind'\'' is missing'
+ local 'message=pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object '\''Kind'\'' is missing in '\''{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'\'''
+ local 'match=Object '\''Kind'\'' is missing'
+ grep -q 'Object '\''Kind'\'' is missing'
+ echo Successful
Successful
+ echo 'message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object '\''Kind'\'' is missing in '\''{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'\'''
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
+ echo 'has:Object '\''Kind'\'' is missing'
has:Object 'Kind' is missing
+ return 0
+ kube::test::get_object_assert pods '{{range.items}}{{.metadata.name}}:{{end}}' busybox0:busybox1:
+ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.name}}:{{end}}' busybox0:busybox1:
+ local tries=1
... skipping 20 lines ...
(B+ return 0
++ kubectl apply -f hack/testdata/recursive/pod-modify --recursive -s http://127.0.0.1:8080 --match-server-version
+ output_message='Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false'
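The repeated warning is kubectl's standard notice that these pods carry no last-applied-configuration annotation. A hedged illustration of the creation path the warning recommends, reusing the fixture paths from this run:

# Creating with --save-config records the
# kubectl.kubernetes.io/last-applied-configuration annotation, so a
# later 'kubectl apply' over the same objects should not warn.
kubectl create -f hack/testdata/recursive/pod --recursive --save-config \
  -s http://127.0.0.1:8080 --match-server-version
kubectl apply -f hack/testdata/recursive/pod-modify --recursive \
  -s http://127.0.0.1:8080 --match-server-version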
+ kube::test::get_object_assert pods '{{range.items}}{{.metadata.labels.status}}:{{end}}' replaced:replaced:
+ kube::test::object_assert 1 pods '{{range.items}}{{.metadata.labels.status}}:{{end}}' replaced:replaced:
+ local tries=1
+ local object=pods
+ local 'request={{range.items}}{{.metadata.labels.status}}:{{end}}'
+ local expected=replaced:replaced:
... skipping 16 lines ...
+ echo -n '(B'
(B+ return 0
+ kube::test::if_has_string 'Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false' 'error validating data: kind not set'
+ local 'message=Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false'
+ local 'match=error validating data: kind not set'
+ grep -q 'error validating data: kind not set'
+ echo Successful
Successful
+ echo 'message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false'
message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
+ echo 'has:error validating data: kind not set'
has:error validating data: kind not set
+ return 0
+ kube::test::get_object_assert deployment '{{range.items}}{{.metadata.name}}:{{end}}' ''
+ kube::test::object_assert 1 deployment '{{range.items}}{{.metadata.name}}:{{end}}' ''
+ local tries=1
+ local object=deployment
+ local 'request={{range.items}}{{.metadata.name}}:{{end}}'
... skipping 336 lines ...
++ kube::test::get_caller 3
++ local levels=3
++ local caller_file=/home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/generic-resources.sh
++ local caller_line=280
+++ basename /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/generic-resources.sh
++ echo generic-resources.sh:280
+ echo 'generic-resources.sh:280: FAIL!'
generic-resources.sh:280: FAIL!
+ echo 'Get pods {{range.items}}{{.metadata.name}}:{{end}}'
Get pods {{range.items}}{{.metadata.name}}:{{end}}
+ echo '  Expected: busybox0:busybox1:'
  Expected: busybox0:busybox1:
+ echo '  Got:      busybox0:busybox1:nginx-5f7cff5b56-4mjb6:nginx-5f7cff5b56-nrbsd:nginx-5f7cff5b56-qlkgz:'
  Got:      busybox0:busybox1:nginx-5f7cff5b56-4mjb6:nginx-5f7cff5b56-nrbsd:nginx-5f7cff5b56-qlkgz:
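This is the run's actual cmd-test failure: three nginx-5f7cff5b56-* pods, evidently left behind by an earlier Deployment test, pollute the pod list, so the name assertion no longer matches. Piecing together the traces (the tries/object/request/expected locals, the get_caller file:line prefix, and this Expected/Got report), the assertion helper's core logic looks roughly like the sketch below; this is a reconstruction from the log, not the actual hack/lib source, and the retry back-off is an assumption:

# Reconstructed sketch of kube::test::object_assert based on this xtrace.
kube::test::object_assert() {
  local tries=$1 object=$2 request=$3 expected=$4
  local res
  for ((i = 0; i < tries; i++)); do
    res=$(kubectl get "${object}" -o go-template="${request}" \
          -s http://127.0.0.1:8080 --match-server-version)
    if [[ "${res}" == "${expected}" ]]; then
      echo "$(kube::test::get_caller 3): Successful get ${object} ${request}: ${res}"
      return 0
    fi
    sleep 1   # assumed delay between retries
  done
  echo "$(kube::test::get_caller 3): FAIL!"
  echo "Get ${object} ${request}"
  echo "  Expected: ${expected}"
  echo "  Got:      ${res}"
  return 1
}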
... skipping 107 lines ...
I0315 19:07:26.653882   47212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 19:07:26.654156   47212 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping repeated balancerWrapper/grpc reconnect lines ...
I0315 19:07:26.654510   47212 controller.go:87] Shutting down OpenAPI AggregationController
I0315 19:07:26.654643   47212 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0315 19:07:26.654664   47212 secure_serving.go:160] Stopped listening on 127.0.0.1:8080
... skipping repeated balancerWrapper/grpc reconnect lines ...
E0315 19:07:26.656431   47212 controller.go:179] rpc error: code = Unavailable desc = transport is closing
... skipping repeated balancerWrapper lines ...
junit report dir: /logs/artifacts
+++ [0315 19:07:26] Clean up complete
Makefile:298: recipe for target 'test-cmd' failed
make: *** [test-cmd] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2019/03/15 19:07:26 Cleaning up Docker data root...
[Barnacle] 2019/03/15 19:07:26 Removing all containers.
... skipping 21 lines ...