PR: pacoxu: add --system-reserved support for swap
Result: ABORTED
Tests: 0 failed / 134 succeeded
Started: 2022-05-04 12:41
Elapsed: 18m36s
Revision: 6fe67b8be8d3ba4a7a744a0b80cc3207ee40f8cd
Refs: 105271

No Test Failures!



Error lines from build-log.txt

... skipping 75 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 163: bogus-expected-to-fail: command not found
!!! [0504 12:47:13] Call tree:
!!! [0504 12:47:13]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0504 12:47:13]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0504 12:47:13]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:139 juLog(...)
!!! [0504 12:47:13]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:167 record_command(...)
!!! [0504 12:47:13]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
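Note: the canary failure above is expected. record_command_canary exists to prove that the failure-recording plumbing itself works; a simplified sketch of the mechanism (not the exact legacy-script.sh source) is:

  record_command_canary() {
    bogus-expected-to-fail   # deliberately nonexistent command; must fail
  }
  record_command record_command_canary   # sh2ju.sh's juLog/eVal record the failure as a JUnit case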
+++ [0504 12:47:13] Running kubeadm tests
+++ [0504 12:47:15] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0504 12:47:18] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubeadm (static)
+++ [0504 12:48:06] Building go targets for linux/amd64
... skipping 197 lines ...
I0504 12:51:18.363787   53248 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0504 12:51:18.364034   53248 cache.go:39] Caches are synced for autoregister controller
I0504 12:51:18.364587   53248 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0504 12:51:18.365771   53248 apf_controller.go:322] Running API Priority and Fairness config worker
I0504 12:51:18.957161   53248 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0504 12:51:18.964685   53248 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
E0504 12:51:19.263278   53248 controller.go:113] loading OpenAPI spec for "" failed with: APIService  does not exist for update
I0504 12:51:19.263314   53248 controller.go:126] OpenAPI AggregationController: action for item : Rate Limited Requeue.
I0504 12:51:19.270972   53248 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0504 12:51:19.283231   53248 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0504 12:51:19.283257   53248 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0504 12:51:21.078617   53248 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0504 12:51:21.198165   53248 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
... skipping 7 lines ...
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0504 12:51:27] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-controller-manager (static)
+++ [0504 12:51:59] Generate kubeconfig for controller-manager
+++ [0504 12:51:59] Starting controller-manager
I0504 12:52:00.357358   56800 serving.go:348] Generated self-signed cert in-memory
W0504 12:52:00.656290   56800 authentication.go:423] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0504 12:52:00.656335   56800 authentication.go:317] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0504 12:52:00.656347   56800 authentication.go:341] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0504 12:52:00.656367   56800 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0504 12:52:00.656384   56800 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0504 12:52:00.656421   56800 controllermanager.go:180] Version: v1.25.0-alpha.0.204+c188f8924fe828
I0504 12:52:00.656442   56800 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0504 12:52:00.658344   56800 secure_serving.go:210] Serving securely on [::]:10257
I0504 12:52:00.658692   56800 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0504 12:52:00.658689   56800 tlsconfig.go:240] "Starting DynamicServingCertificateController"
... skipping 84 lines ...
I0504 12:52:00.812032   56800 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0504 12:52:00.812062   56800 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I0504 12:52:00.812281   56800 controllermanager.go:593] Started "csrapproving"
I0504 12:52:00.812434   56800 certificate_controller.go:119] Starting certificate controller "csrapproving"
I0504 12:52:00.812457   56800 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving
I0504 12:52:00.812486   56800 node_lifecycle_controller.go:77] Sending events to api server
E0504 12:52:00.812507   56800 core.go:211] failed to start cloud node lifecycle controller: no cloud provider provided
W0504 12:52:00.812520   56800 controllermanager.go:571] Skipping "cloud-node-lifecycle"
I0504 12:52:00.812799   56800 controllermanager.go:593] Started "persistentvolume-expander"
I0504 12:52:00.812824   56800 expand_controller.go:341] Starting expand controller
I0504 12:52:00.812835   56800 shared_informer.go:255] Waiting for caches to sync for expand
I0504 12:52:00.813004   56800 controllermanager.go:593] Started "pvc-protection"
I0504 12:52:00.813232   56800 pvc_protection_controller.go:103] "Starting PVC protection controller"
... skipping 75 lines ...
I0504 12:52:00.825977   56800 controllermanager.go:593] Started "disruption"
I0504 12:52:00.826209   56800 disruption.go:363] Starting disruption controller
I0504 12:52:00.826280   56800 shared_informer.go:255] Waiting for caches to sync for disruption
I0504 12:52:00.826310   56800 controllermanager.go:593] Started "cronjob"
I0504 12:52:00.826512   56800 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
I0504 12:52:00.826536   56800 shared_informer.go:255] Waiting for caches to sync for cronjob
E0504 12:52:00.826663   56800 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0504 12:52:00.826690   56800 controllermanager.go:571] Skipping "service"
I0504 12:52:00.843940   56800 shared_informer.go:255] Waiting for caches to sync for resource quota
W0504 12:52:00.849315   56800 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0504 12:52:00.849564   56800 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0504 12:52:00.849622   56800 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0504 12:52:00.849892   56800 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 43 lines ...
I0504 12:52:01.209921   56800 shared_informer.go:262] Caches are synced for attach detach
I0504 12:52:01.221190   56800 shared_informer.go:262] Caches are synced for resource quota
I0504 12:52:01.651987   56800 shared_informer.go:262] Caches are synced for garbage collector
I0504 12:52:01.725016   56800 shared_informer.go:262] Caches are synced for garbage collector
I0504 12:52:01.725059   56800 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
node/127.0.0.1 created
W0504 12:52:02.487392   56800 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [0504 12:52:02] Checking kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.204+c188f8924fe828", GitCommit:"c188f8924fe82870b79888fec308630b915526f6", GitTreeState:"clean", BuildDate:"2022-05-04T11:57:52Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.204+c188f8924fe828", GitCommit:"c188f8924fe82870b79888fec308630b915526f6", GitTreeState:"clean", BuildDate:"2022-05-04T11:57:52Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   42s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 196 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0504 12:52:08] Creating namespace namespace-1651668728-16919
namespace/namespace-1651668728-16919 created
Context "test" modified.
+++ [0504 12:52:08] Testing RESTMapper
+++ [0504 12:52:08] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
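The RESTMapper negative check above boils down to requesting a type the server cannot resolve, e.g.:

  kubectl get unknownresourcetype
  # error: the server doesn't have a resource type "unknownresourcetype"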
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 61 lines ...
namespace/namespace-1651668737-30032 created
Context "test" modified.
+++ [0504 12:52:17] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 8 lines ...
rbac.sh:50: Successful get clusterrole/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
clusterrole.rbac.authorization.k8s.io/resource-reader created
rbac.sh:52: Successful get clusterrole/resource-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:get:list:
rbac.sh:53: Successful get clusterrole/resource-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:deployments:
rbac.sh:54: Successful get clusterrole/resource-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :apps:
clusterrole.rbac.authorization.k8s.io/resourcename-reader created
E0504 12:52:19.264418   53248 controller.go:113] loading OpenAPI spec for "" failed with: APIService  does not exist for update
I0504 12:52:19.264452   53248 controller.go:126] OpenAPI AggregationController: action for item : Rate Limited Requeue.
rbac.sh:56: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:57: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:58: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
rbac.sh:59: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.resourceNames}}{{.}}:{{end}}{{end}}: foo:
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1651668746-11513 created
Context "test" modified.
+++ [0504 12:52:26] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 439 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I0504 12:52:41.803623   61612 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I0504 12:52:41.805543   61612 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3D7c2731a1-f0df-4ee8-a53e-795bc45567a8%2CinvolvedObject.name%3Dtest-pdb-2&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
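That error comes from passing mutually exclusive budget flags to the generator; roughly (name and selector illustrative):

  kubectl create poddisruptionbudget my-pdb --selector=app=x --min-available=1 --max-unavailable=1
  # error: min-available and max-unavailable cannot be both specified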
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 242 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.7:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0504 12:52:59] "kubectl patch with resourceVersion 603" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
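A patch that pins a stale resourceVersion reproduces this; a sketch (label value illustrative):

  kubectl patch pod valid-pod -p '{"metadata":{"resourceVersion":"603","labels":{"a":"b"}}}'
  # rejected with a Conflict once 603 is no longer the current resourceVersion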
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
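Both guards enforce the same rule: --grace-period and --timeout on delete are only honored together with --force, e.g.:

  kubectl delete pod valid-pod --force --grace-period=0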
node/node-v1-test created
W0504 12:53:01.046044   56800 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
core.sh:655: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 29 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:3.7
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
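Re-labeling an existing key requires --overwrite, which is what the later "pod/valid-pod labeled" step does; roughly:

  kubectl label pod valid-pod name=valid-pod-super-sayan --overwrite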
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 84 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0504 12:53:11] Creating namespace namespace-1651668791-18155
namespace/namespace-1651668791-18155 created
Context "test" modified.
+++ [0504 12:53:12] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 63 lines ...
	If true, keep the managedFields when printing objects in JSON or YAML format.

    --template='':
	Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

    --validate='strict':
	Must be one of: strict (or true), warn, ignore (or false). 		"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. 		"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise. 		"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.

    --windows-line-endings=false:
	Only relevant if --edit=true. Defaults to the line ending native to your platform.

Usage:
  kubectl create -f FILENAME [options]
... skipping 38 lines ...
I0504 12:53:15.415577   56800 event.go:294] "Event occurred" object="namespace-1651668792-32660/test-deployment-retainkeys-fcb4f8566" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-fcb4f8566-hn5n8"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0504 12:53:16.505979   65331 helpers.go:650] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
pod/test-pod created (dry run)
pod/test-pod created (dry run)
... skipping 29 lines ...
pod/b created
apply.sh:208: Successful get pods a {{.metadata.name}}: a
apply.sh:209: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
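Pruning refuses to run without an explicit scope; either sketched form satisfies it (filename and label illustrative):

  kubectl apply --prune --all -f manifest.yaml
  kubectl apply --prune -l name=example -f manifest.yaml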
I0504 12:53:26.330322   56800 horizontal.go:360] Horizontal Pod Autoscaler frontend has been deleted in namespace-1651668789-25431
pod/a created
pod/b created
I0504 12:53:26.419574   53248 alloc.go:327] "allocated clusterIPs" service="namespace-1651668792-32660/prune-svc" clusterIPs=map[IPv4:10.0.0.79]
service/prune-svc created
... skipping 37 lines ...
apply.sh:262: Successful get pods b -n nsb {{.metadata.name}}: b
pod/b unchanged
pod/a pruned
apply.sh:266: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
namespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
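The check fires when the manifest's metadata.namespace disagrees with the namespace in effect; the suggested fix is literal (filename illustrative):

  kubectl apply --namespace=nsb -f pod-in-nsb.yaml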
apply.sh:277: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:281: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 28 lines ...
apply.sh:303: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:304: Successful get service test-the-service {{.metadata.name}}: test-the-service
configmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:312: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:320: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
pod "test-pod" deleted
namespace "multi-resource-ns" deleted
I0504 12:53:56.205962   56800 namespace_controller.go:185] Namespace has been deleted nsb
apply.sh:326: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Bogus" in version "example.com/v1"
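The "ensure CRDs are installed first" failures come from applying a custom resource before its CRD is established; a sketch of the ordered variant (filenames illustrative):

  kubectl apply -f widgets-crd.yaml
  kubectl wait --for=condition=established crd/widgets.example.com
  kubectl apply -f widget-foo.yaml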
apply.sh:332: Successful get configmaps foo {{.metadata.name}}: foo
configmap "foo" deleted
apply.sh:338: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
... skipping 6 lines ...
pod "pod-a" deleted
pod "pod-c" deleted
apply.sh:346: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apply.sh:350: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Widget" in version "example.com/v1"
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0504 12:54:05.141455   56800 namespace_controller.go:185] Namespace has been deleted multi-resource-ns
I0504 12:54:06.757591   53248 controller.go:611] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
... skipping 32 lines ...
message:885
has:885
pod "test-pod" deleted
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
+++ [0504 12:54:09] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 75 lines ...
pod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
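The field-manager conflict shown earlier ('conflict with "kubectl-client-side-apply"') is resolved the way the message suggests, by re-applying server-side and taking ownership; a sketch (filename illustrative):

  kubectl apply --server-side --force-conflicts -f test-pod.yaml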
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0504 12:54:15] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 18 lines ...
apps.sh:136: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
apps.sh:137: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1
apps.sh:138: Successful get deployments my-depl {{.metadata.labels.l1}}: <no value>
deployment.apps "my-depl" deleted
replicaset.apps "my-depl-dc96cf9f7" deleted
pod "my-depl-dc96cf9f7-b64z9" deleted
E0504 12:54:17.515721   56800 replica_set.go:550] sync "namespace-1651668855-23654/my-depl-dc96cf9f7" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-dc96cf9f7": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1651668855-23654/my-depl-dc96cf9f7, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: fcc37c0b-4e51-42d5-84cc-57105a6f99c3, UID in object meta: 
apps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:145: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:146: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:150: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0504 12:54:18.114523   56800 event.go:294] "Event occurred" object="namespace-1651668855-23654/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-7bf7574b94 to 3"
I0504 12:54:18.125795   56800 event.go:294] "Event occurred" object="namespace-1651668855-23654/nginx-7bf7574b94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-7bf7574b94-rng9j"
I0504 12:54:18.136183   56800 event.go:294] "Event occurred" object="namespace-1651668855-23654/nginx-7bf7574b94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-7bf7574b94-8fdlv"
I0504 12:54:18.136215   56800 event.go:294] "Event occurred" object="namespace-1651668855-23654/nginx-7bf7574b94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-7bf7574b94-5j8l7"
apps.sh:154: Successful get deployment nginx {{.metadata.name}}: nginx
E0504 12:54:19.264803   53248 controller.go:113] loading OpenAPI spec for "" failed with: APIService  does not exist for update
I0504 12:54:19.264833   53248 controller.go:126] OpenAPI AggregationController: action for item : Rate Limited Requeue.
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1651668855-23654\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1651668855-23654"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I0504 12:54:26.705051   56800 event.go:294] "Event occurred" object="namespace-1651668855-23654/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-76dc678589 to 3"
I0504 12:54:26.741006   56800 event.go:294] "Event occurred" object="namespace-1651668855-23654/nginx-76dc678589" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-76dc678589-xkt68"
I0504 12:54:26.753535   56800 event.go:294] "Event occurred" object="namespace-1651668855-23654/nginx-76dc678589" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-76dc678589-hp5hb"
I0504 12:54:26.753985   56800 event.go:294] "Event occurred" object="namespace-1651668855-23654/nginx-76dc678589" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-76dc678589-m8gh2"
Successful
... skipping 495 lines ...
+++ [0504 12:54:39] Creating namespace namespace-1651668879-23886
namespace/namespace-1651668879-23886 created
Context "test" modified.
+++ [0504 12:54:39] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
(Bmessage:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 21 lines ...
has not:No resources found
Successful
(Bmessage:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1651668879-23886 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1651668879-23886 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0504 12:54:41.992726   68866 loader.go:372] Config loaded from file:  /tmp/tmp.o53TEDbtxd/.kube/config
I0504 12:54:41.998420   68866 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I0504 12:54:42.037878   68866 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0504 12:54:42.039800   68866 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 596 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2022-05-04T12:54:49Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2022-05-04T12:54:49Z"}}, "name":"valid-pod", "namespace":"namespace-1651668889-21776", "resourceVersion":"1064", "uid":"75eea57b-becf-4518-b0cd-bf63b0d58516"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2022-05-04T12:54:49Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2022-05-04T12:54:49Z"}],"name":"valid-pod","namespace":"namespace-1651668889-21776","resourceVersion":"1064","uid":"75eea57b-becf-4518-b0cd-bf63b0d58516"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2022-05-04T12:54:49Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2022-05-04T12:54:49Z]] name:valid-pod namespace:namespace-1651668889-21776 resourceVersion:1064 uid:75eea57b-becf-4518-b0cd-bf63b0d58516] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
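The two failures above contrast the output engines on a missing key; both commands are reconstructible from the log:

  kubectl get pod valid-pod -o jsonpath='{.missing}'        # jsonpath: "missing is not found"
  kubectl get pod valid-pod -o go-template='{{.missing}}'   # go-template: map has no entry for key "missing"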
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:Error from server (NotFound): the server could not find the requested resource
has:the server could not find the requested resource
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:STATUS
Successful
... skipping 78 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:204: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 37 lines ...
+++ [0504 12:54:55] Creating namespace namespace-1651668895-19286
namespace/namespace-1651668895-19286 created
Context "test" modified.
+++ [0504 12:54:55] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0504 12:54:56] Creating namespace namespace-1651668896-15102
namespace/namespace-1651668896-15102 created
Context "test" modified.
+++ [0504 12:54:56] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0504 12:54:57.615980   56800 event.go:294] "Event occurred" object="namespace-1651668896-15102/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-klc29"
I0504 12:54:57.626610   56800 event.go:294] "Event occurred" object="namespace-1651668896-15102/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-zjmgk"
I0504 12:54:57.626712   56800 event.go:294] "Event occurred" object="namespace-1651668896-15102/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-nqsv6"
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-klc29 does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-klc29 does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
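All of these exec cases exercise the non-deprecated argument form, with either a pod name or a type/name that resolves to a pod, e.g.:

  kubectl exec test-pod -- date              # pod by name
  kubectl exec replicaset/frontend -- date   # type/name; a pod is picked from the workload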
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"3027d26f-e370-4629-b47d-2e6711863513","resourceVersion":"1143","creationTimestamp":"2022-05-04T12:54:58Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"3027d26f-e370-4629-b47d-2e6711863513","resourceVersion":"1144","creationTimestamp":"2022-05-04T12:54:58Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"3027d26f-e370-4629-b47d-2e6711863513","resourceVersion":"1144","creationTimestamp":"2022-05-04T12:54:58Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"3027d26f-e370-4629-b47d-2e6711863513"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 25 lines ...
+++ command: run_kubectl_create_validate_tests
+++ [0504 12:55:00] Creating namespace namespace-1651668900-20246
namespace/namespace-1651668900-20246 created
Context "test" modified.
+++ [0504 12:55:00] Testing kubectl create --validate=true
Successful
message:error: error validating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "baz" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "foo" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
has either:strict decoding error
or:error validating data
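The --validate modes exercised in this block differ only in how schema violations are handled; roughly (file taken from the log):

  kubectl create -f hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml --validate=strict   # reject the request
  kubectl create -f hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml --validate=warn     # warn, then create
  kubectl create -f hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml --validate=ignore   # create silently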
+++ [0504 12:55:00] Testing kubectl create --validate=false
Successful
message:deployment.apps/invalid-nginx-deployment created
has:deployment.apps/invalid-nginx-deployment created
I0504 12:55:00.685495   56800 event.go:294] "Event occurred" object="namespace-1651668900-20246/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-6595874d85 to 4"
I0504 12:55:00.734276   56800 event.go:294] "Event occurred" object="namespace-1651668900-20246/invalid-nginx-deployment-6595874d85" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-6595874d85-vjf99"
I0504 12:55:00.747237   56800 event.go:294] "Event occurred" object="namespace-1651668900-20246/invalid-nginx-deployment-6595874d85" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-6595874d85-h869n"
I0504 12:55:00.747281   56800 event.go:294] "Event occurred" object="namespace-1651668900-20246/invalid-nginx-deployment-6595874d85" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-6595874d85-m5fch"
deployment.apps "invalid-nginx-deployment" deleted
I0504 12:55:00.759110   56800 event.go:294] "Event occurred" object="namespace-1651668900-20246/invalid-nginx-deployment-6595874d85" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-6595874d85-wv82f"
+++ [0504 12:55:00] Testing kubectl create --validate=strict
E0504 12:55:00.801151   56800 replica_set.go:550] sync "namespace-1651668900-20246/invalid-nginx-deployment-6595874d85" failed with replicasets.apps "invalid-nginx-deployment-6595874d85" not found
Successful
message:error: error validating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "baz" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "foo" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
has either:strict decoding error
or:error validating data
+++ [0504 12:55:00] Testing kubectl create --validate=warn
W0504 12:55:01.131672   70267 schema.go:146] cannot perform warn validation if server-side field validation is unsupported, skipping validation
Successful
message:deployment.apps/invalid-nginx-deployment created
has:deployment.apps/invalid-nginx-deployment created
I0504 12:55:01.151218   56800 event.go:294] "Event occurred" object="namespace-1651668900-20246/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-6595874d85 to 4"
... skipping 10 lines ...
I0504 12:55:01.324454   56800 event.go:294] "Event occurred" object="namespace-1651668900-20246/invalid-nginx-deployment-6595874d85" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-6595874d85-8kpsw"
I0504 12:55:01.378477   56800 event.go:294] "Event occurred" object="namespace-1651668900-20246/invalid-nginx-deployment-6595874d85" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-6595874d85-5lwcj"
I0504 12:55:01.378534   56800 event.go:294] "Event occurred" object="namespace-1651668900-20246/invalid-nginx-deployment-6595874d85" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-6595874d85-8c264"
deployment.apps "invalid-nginx-deployment" deleted
I0504 12:55:01.388720   56800 event.go:294] "Event occurred" object="namespace-1651668900-20246/invalid-nginx-deployment-6595874d85" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-6595874d85-grbfm"
+++ [0504 12:55:01] Testing kubectl create
E0504 12:55:01.424127   56800 replica_set.go:550] sync "namespace-1651668900-20246/invalid-nginx-deployment-6595874d85" failed with replicasets.apps "invalid-nginx-deployment-6595874d85" not found
Successful
message:error: error validating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "baz" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "foo" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
has either:strict decoding error
or:error validating data
+++ [0504 12:55:01] Testing kubectl create --validate=foo
Successful
message:error: invalid - validate option "foo"; must be one of: strict (or true), warn, ignore (or false)
has:invalid - validate option "foo"
+++ exit code: 0
Recording: run_convert_tests
Running command: run_convert_tests

+++ Running case: test-cmd.run_convert_tests 
... skipping 50 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:apps/v1beta1
deployment.apps "nginx" deleted
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
Successful
message:nginx:
has:nginx:
+++ exit code: 0
Recording: run_kubectl_delete_allnamespaces_tests
... skipping 103 lines ...
has:Timeout
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 185 lines ...
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:282: Successful get foos/test {{.patched}}: value2
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:284: Successful get foos/test {{.patched}}: <no value>
+++ [0504 12:55:14] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 253 lines ...