PR adrianreber: Minimal checkpointing support
Result ABORTED
Tests 0 failed / 135 succeeded
Started 2022-07-08 07:01
Elapsed 47m41s
Revision 9671d7dd3d4a83444b68e0f41097401a861e0f21
Refs 104907

No Test Failures!



Error lines from build-log.txt

... skipping 75 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 164: bogus-expected-to-fail: command not found
!!! [0708 07:08:01] Call tree:
!!! [0708 07:08:01]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0708 07:08:01]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0708 07:08:01]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:140 juLog(...)
!!! [0708 07:08:01]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:168 record_command(...)
!!! [0708 07:08:01]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0708 07:08:01] Running kubeadm tests
+++ [0708 07:08:02] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0708 07:08:06] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubeadm (static)
+++ [0708 07:08:53] Building go targets for linux/amd64
... skipping 220 lines ...
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0708 07:12:14] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-controller-manager (static)
+++ [0708 07:12:46] Generate kubeconfig for controller-manager
+++ [0708 07:12:46] Starting controller-manager
I0708 07:12:47.637178   56991 serving.go:348] Generated self-signed cert in-memory
W0708 07:12:48.619058   56991 authentication.go:423] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0708 07:12:48.619098   56991 authentication.go:317] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0708 07:12:48.619108   56991 authentication.go:341] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0708 07:12:48.619170   56991 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0708 07:12:48.619190   56991 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0708 07:12:48.619239   56991 controllermanager.go:178] Version: v1.25.0-alpha.2.167+b63a01d01db99e
I0708 07:12:48.619307   56991 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0708 07:12:48.620835   56991 secure_serving.go:210] Serving securely on [::]:10257
I0708 07:12:48.620995   56991 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0708 07:12:48.621239   56991 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
... skipping 66 lines ...
I0708 07:12:48.675725   56991 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0708 07:12:48.676604   56991 controllermanager.go:597] Started "csrsigning"
I0708 07:12:48.676751   56991 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
I0708 07:12:48.676776   56991 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0708 07:12:48.676814   56991 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I0708 07:12:48.676853   56991 node_lifecycle_controller.go:77] Sending events to api server
E0708 07:12:48.676905   56991 core.go:211] failed to start cloud node lifecycle controller: no cloud provider provided
W0708 07:12:48.676925   56991 controllermanager.go:575] Skipping "cloud-node-lifecycle"
I0708 07:12:48.677292   56991 controllermanager.go:597] Started "pvc-protection"
I0708 07:12:48.677387   56991 pvc_protection_controller.go:103] "Starting PVC protection controller"
I0708 07:12:48.677411   56991 shared_informer.go:255] Waiting for caches to sync for PVC protection
W0708 07:12:48.677546   56991 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0708 07:12:48.677601   56991 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 88 lines ...
I0708 07:12:48.696233   56991 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for statefulsets.apps
I0708 07:12:48.696275   56991 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for daemonsets.apps
I0708 07:12:48.696328   56991 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0708 07:12:48.696358   56991 controllermanager.go:597] Started "resourcequota"
I0708 07:12:48.696700   56991 controllermanager.go:597] Started "job"
I0708 07:12:48.696871   56991 controllermanager.go:597] Started "csrcleaner"
E0708 07:12:48.697174   56991 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0708 07:12:48.697201   56991 controllermanager.go:575] Skipping "service"
I0708 07:12:48.699078   56991 resource_quota_controller.go:274] Starting resource quota controller
I0708 07:12:48.699105   56991 shared_informer.go:255] Waiting for caches to sync for resource quota
I0708 07:12:48.699345   56991 job_controller.go:190] Starting job controller
I0708 07:12:48.699362   56991 shared_informer.go:255] Waiting for caches to sync for job
I0708 07:12:48.699387   56991 cleaner.go:82] Starting CSR cleaner controller
... skipping 47 lines ...
I0708 07:12:49.084788   56991 shared_informer.go:262] Caches are synced for persistent volume
I0708 07:12:49.100096   56991 shared_informer.go:262] Caches are synced for resource quota
I0708 07:12:49.482802   56991 shared_informer.go:262] Caches are synced for garbage collector
I0708 07:12:49.482840   56991 garbagecollector.go:166] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0708 07:12:49.528068   56991 shared_informer.go:262] Caches are synced for garbage collector
node/127.0.0.1 created
W0708 07:12:49.926057   56991 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [0708 07:12:49] Checking kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.2.167+b63a01d01db99e", GitCommit:"b63a01d01db99e869d86127c11b96f7675e59149", GitTreeState:"clean", BuildDate:"2022-07-08T06:57:47Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.2.167+b63a01d01db99e", GitCommit:"b63a01d01db99e869d86127c11b96f7675e59149", GitTreeState:"clean", BuildDate:"2022-07-08T06:57:47Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   43s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 196 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0708 07:12:56] Creating namespace namespace-1657264376-4639
namespace/namespace-1657264376-4639 created
Context "test" modified.
+++ [0708 07:12:56] Testing RESTMapper
+++ [0708 07:12:56] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 60 lines ...
namespace/namespace-1657264384-30371 created
Context "test" modified.
+++ [0708 07:13:04] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
(Brbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
(BSuccessful
(Bmessage:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
(BSuccessful
(Bmessage:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
(Bclusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
(Brbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
(Bclusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
(BSuccessful
(Bmessage:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
(Bclusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
(Brbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
(Brbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
(Brolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
(Brbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
(Brolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1657264395-29018 created
Context "test" modified.
+++ [0708 07:13:15] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
(Brbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
(Brbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
(BSuccessful
... skipping 439 lines ...
has:valid-pod
Successful
(Bmessage:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Berror: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Berror: setting 'all' parameter but found a non empty selector. 
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
(Bcore.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I0708 07:13:30.796991   61769 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I0708 07:13:30.799115   61769 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3Ddb861960-7fa6-414d-8498-9f46a597d20a&limit=500 200 OK in 1 milliseconds
(Bpoddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
(Bpoddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
(Berror: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 242 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:3.7:
(BSuccessful
(Bmessage:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
(B+++ [0708 07:13:49] "kubectl patch with resourceVersion 596" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
(BSuccessful
(Bmessage:kubectl-replace
has:kubectl-replace
Successful
(Bmessage:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
(Bmessage:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0708 07:13:50.883399   56991 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
(Bnode/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
(Bnode/node-v1-test replaced
core.sh:655: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 29 lines ...
spec:
  containers:
  - image: registry.k8s.io/pause:3.7
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Berror: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bcore.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bpod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
(Bcore.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 84 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0708 07:14:02] Creating namespace namespace-1657264442-13889
namespace/namespace-1657264442-13889 created
Context "test" modified.
+++ [0708 07:14:02] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 63 lines ...
	If true, keep the managedFields when printing objects in JSON or YAML format.

    --template='':
	Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

    --validate='strict':
	Must be one of: strict (or true), warn, ignore (or false). 		"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. 		"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise. 		"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.

    --windows-line-endings=false:
	Only relevant if --edit=true. Defaults to the line ending native to your platform.

Usage:
  kubectl create -f FILENAME [options]
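
The --validate modes described in the help text above correspond to invocations like the following (an illustrative sketch; deployment.yaml is a hypothetical manifest, not a file from this run):

  kubectl create -f deployment.yaml --validate=strict   # reject manifests with unknown or duplicate fields
  kubectl create -f deployment.yaml --validate=warn     # warn about such fields, but still create the object
  kubectl create -f deployment.yaml --validate=ignore   # skip schema validation entirely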
... skipping 38 lines ...
I0708 07:14:05.842840   56991 event.go:294] "Event occurred" object="namespace-1657264442-2262/test-deployment-retainkeys-54bb65fd55" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-54bb65fd55-rpzwc"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
(BSuccessful
(Bmessage:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BW0708 07:14:07.009529   65472 helpers.go:650] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
pod/test-pod created (dry run)
pod/test-pod created (dry run)
... skipping 29 lines ...
(Bpod/b created
apply.sh:208: Successful get pods a {{.metadata.name}}: a
(Bapply.sh:209: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod "a" deleted
pod "b" deleted
Successful
(Bmessage:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
I0708 07:14:16.564072   56991 horizontal.go:360] Horizontal Pod Autoscaler frontend has been deleted in namespace-1657264439-11708
pod/a created
pod/b created
I0708 07:14:16.739009   53360 alloc.go:327] "allocated clusterIPs" service="namespace-1657264442-2262/prune-svc" clusterIPs=map[IPv4:10.0.0.117]
service/prune-svc created
... skipping 37 lines ...
apply.sh:262: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod/b unchanged
pod/a pruned
apply.sh:266: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
(Bnamespace "nsb" deleted
Successful
(Bmessage:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:277: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
(Bservice/a created
apply.sh:281: Successful get services a {{.metadata.name}}: a
(BSuccessful
(Bmessage:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 28 lines ...
(Bapply.sh:303: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
(Bapply.sh:304: Successful get service test-the-service {{.metadata.name}}: test-the-service
(Bconfigmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
(Bmessage:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:312: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
(Bmessage:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:320: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
I0708 07:14:46.795382   56991 namespace_controller.go:185] Namespace has been deleted nsb
apply.sh:326: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:configmap/foo created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:332: Successful get configmaps foo {{.metadata.name}}: foo
(Bconfigmap "foo" deleted
apply.sh:338: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
... skipping 6 lines ...
(Bpod "pod-a" deleted
pod "pod-c" deleted
apply.sh:346: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bapply.sh:350: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Widget" in version "example.com/v1"
Successful
(Bmessage:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
(BI0708 07:14:55.145593   53360 controller.go:622] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
apply.sh:359: Successful get widget foo {{.metadata.name}}: foo
... skipping 32 lines ...
(Bmessage:872
has:872
pod "test-pod" deleted
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(B+++ [0708 07:14:58] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 75 lines ...
(Bpod "nginx-extensions" deleted
Successful
(Bmessage:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
(Bmessage:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0708 07:15:03] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
(BSuccessful
(Bmessage:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 29 lines ...
I0708 07:15:07.113448   56991 event.go:294] "Event occurred" object="namespace-1657264504-2531/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-8568cdb9b4 to 3"
I0708 07:15:07.126281   56991 event.go:294] "Event occurred" object="namespace-1657264504-2531/nginx-8568cdb9b4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8568cdb9b4-5zj9r"
I0708 07:15:07.136618   56991 event.go:294] "Event occurred" object="namespace-1657264504-2531/nginx-8568cdb9b4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8568cdb9b4-nl62j"
I0708 07:15:07.136651   56991 event.go:294] "Event occurred" object="namespace-1657264504-2531/nginx-8568cdb9b4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8568cdb9b4-wdzd2"
apps.sh:154: Successful get deployment nginx {{.metadata.name}}: nginx
(BSuccessful
(Bmessage:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1657264504-2531\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"registry.k8s.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1657264504-2531"
for: "hack/testdata/deployment-label-change2.yaml": error when patching "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I0708 07:15:15.701991   56991 event.go:294] "Event occurred" object="namespace-1657264504-2531/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-896b58f99 to 3"
I0708 07:15:15.712070   56991 event.go:294] "Event occurred" object="namespace-1657264504-2531/nginx-896b58f99" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-896b58f99-djwvt"
I0708 07:15:15.721277   56991 event.go:294] "Event occurred" object="namespace-1657264504-2531/nginx-896b58f99" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-896b58f99-gtfd9"
I0708 07:15:15.721610   56991 event.go:294] "Event occurred" object="namespace-1657264504-2531/nginx-896b58f99" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-896b58f99-759lv"
Successful
... skipping 503 lines ...
+++ [0708 07:15:29] Creating namespace namespace-1657264529-5303
namespace/namespace-1657264529-5303 created
Context "test" modified.
+++ [0708 07:15:29] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:{
    "apiVersion": "v1",
    "items": [],
... skipping 21 lines ...
has not:No resources found
Successful
(Bmessage:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
(Bmessage:No resources found in namespace-1657264529-5303 namespace.
has:No resources found
Successful
(Bmessage:
has not:No resources found
Successful
(Bmessage:No resources found in namespace-1657264529-5303 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
(Bmessage:Error from server (NotFound): pods "abc" not found
has not:List
Successful
(Bmessage:I0708 07:15:31.458571   68993 loader.go:372] Config loaded from file:  /tmp/tmp.Yn7mJhriOZ/.kube/config
I0708 07:15:31.464315   68993 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I0708 07:15:31.503520   68993 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0708 07:15:31.505410   68993 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 596 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
(Bmessage:valid-pod:
has:valid-pod:
Successful
(Bmessage:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2022-07-08T07:15:39Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2022-07-08T07:15:39Z"}}, "name":"valid-pod", "namespace":"namespace-1657264539-9961", "resourceVersion":"1052", "uid":"e512167b-a1e6-4707-bf37-fa168f5fbdee"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"registry.k8s.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
(Bmessage:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2022-07-08T07:15:39Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2022-07-08T07:15:39Z"}],"name":"valid-pod","namespace":"namespace-1657264539-9961","resourceVersion":"1052","uid":"e512167b-a1e6-4707-bf37-fa168f5fbdee"},"spec":{"containers":[{"image":"registry.k8s.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2022-07-08T07:15:39Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2022-07-08T07:15:39Z]] name:valid-pod namespace:namespace-1657264539-9961 resourceVersion:1052 uid:e512167b-a1e6-4707-bf37-fa168f5fbdee] spec:map[containers:[map[image:registry.k8s.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
Successful
(Bmessage:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
(Bmessage:Error from server (NotFound): the server could not find the requested resource
has:the server could not find the requested resource
Successful
(Bmessage:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:STATUS
Successful
... skipping 78 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
(Bmessage:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:204: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/redis-master created
pod/valid-pod created
Successful
... skipping 159 lines ...
+++ [0708 07:15:53] Creating namespace namespace-1657264553-23799
namespace/namespace-1657264553-23799 created
Context "test" modified.
+++ [0708 07:15:53] Testing kubectl exec POD COMMAND
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0708 07:15:54] Creating namespace namespace-1657264554-1873
namespace/namespace-1657264554-1873 created
Context "test" modified.
+++ [0708 07:15:54] Testing kubectl exec TYPE/NAME COMMAND
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0708 07:15:55.286201   56991 event.go:294] "Event occurred" object="namespace-1657264554-1873/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-r25xm"
I0708 07:15:55.296088   56991 event.go:294] "Event occurred" object="namespace-1657264554-1873/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-kw9tj"
I0708 07:15:55.296118   56991 event.go:294] "Event occurred" object="namespace-1657264554-1873/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-2pn7l"
configmap/test-set-env-config created
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-2pn7l does not have a host assigned
has not:not found
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-2pn7l does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
(Bmessage:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
(Bmessage:user-specified
has:user-specified
Successful
(Bmessage:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
(B{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"cfdb9582-0992-4ae8-a6fd-2fd99c051b9a","resourceVersion":"1149","creationTimestamp":"2022-07-08T07:15:56Z"}}
Successful
(Bmessage:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"cfdb9582-0992-4ae8-a6fd-2fd99c051b9a","resourceVersion":"1150","creationTimestamp":"2022-07-08T07:15:56Z"},"data":{"key1":"config1"}}
has:uid
Successful
(Bmessage:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"cfdb9582-0992-4ae8-a6fd-2fd99c051b9a","resourceVersion":"1150","creationTimestamp":"2022-07-08T07:15:56Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"cfdb9582-0992-4ae8-a6fd-2fd99c051b9a"}}
Successful
(Bmessage:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 26 lines ...
+++ [0708 07:15:58] Creating namespace namespace-1657264558-26491
namespace/namespace-1657264558-26491 created
I0708 07:15:58.344572   56991 namespace_controller.go:185] Namespace has been deleted test-events
Context "test" modified.
+++ [0708 07:15:58] Testing kubectl create --validate
Successful
message:error: error validating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "baz" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "foo" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
has either:strict decoding error
or:error validating data
+++ [0708 07:15:58] Testing kubectl create --validate=true
Successful
message:error: error validating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "baz" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "foo" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
has either:strict decoding error
or:error validating data
+++ [0708 07:15:58] Testing kubectl create --validate=false
Successful
(Bmessage:deployment.apps/invalid-nginx-deployment created
has:deployment.apps/invalid-nginx-deployment created
I0708 07:15:58.847792   56991 event.go:294] "Event occurred" object="namespace-1657264558-26491/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-5fdd67897d to 4"
I0708 07:15:58.867195   56991 event.go:294] "Event occurred" object="namespace-1657264558-26491/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-hwx5k"
I0708 07:15:58.877400   56991 event.go:294] "Event occurred" object="namespace-1657264558-26491/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-8hps5"
I0708 07:15:58.878817   56991 event.go:294] "Event occurred" object="namespace-1657264558-26491/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-dzn8c"
I0708 07:15:58.916468   56991 event.go:294] "Event occurred" object="namespace-1657264558-26491/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-rzjqp"
deployment.apps "invalid-nginx-deployment" deleted
+++ [0708 07:15:58] Testing kubectl create --validate=strict
E0708 07:15:58.959502   56991 replica_set.go:550] sync "namespace-1657264558-26491/invalid-nginx-deployment-5fdd67897d" failed with replicasets.apps "invalid-nginx-deployment-5fdd67897d" not found
Successful
message:error: error validating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "baz" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "foo" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
has either:strict decoding error
or:error validating data
+++ [0708 07:15:59] Testing kubectl create --validate=warn
W0708 07:15:59.299676   70659 schema.go:146] cannot perform warn validation if server-side field validation is unsupported, skipping validation
Successful
(Bmessage:deployment.apps/invalid-nginx-deployment created
has:deployment.apps/invalid-nginx-deployment created
I0708 07:15:59.319296   56991 event.go:294] "Event occurred" object="namespace-1657264558-26491/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-5fdd67897d to 4"
... skipping 11 lines ...
I0708 07:15:59.551992   56991 event.go:294] "Event occurred" object="namespace-1657264558-26491/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-fjgjd"
I0708 07:15:59.552569   56991 event.go:294] "Event occurred" object="namespace-1657264558-26491/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-z9d6r"
I0708 07:15:59.563162   56991 event.go:294] "Event occurred" object="namespace-1657264558-26491/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-h5j2l"
deployment.apps "invalid-nginx-deployment" deleted
+++ [0708 07:15:59] Testing kubectl create
Successful
message:error: error validating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "baz" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "foo" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
has either:strict decoding error
or:error validating data
+++ [0708 07:15:59] Testing kubectl create --validate=foo
Successful
(Bmessage:error: invalid - validate option "foo"; must be one of: strict (or true), warn, ignore (or false)
has:invalid - validate option "foo"
+++ exit code: 0
Recording: run_convert_tests
Running command: run_convert_tests

+++ Running case: test-cmd.run_convert_tests 
... skipping 50 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:apps/v1beta1
deployment.apps "nginx" deleted
Successful
(Bmessage:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
Successful
(Bmessage:nginx:
has:nginx:
+++ exit code: 0
Recording: run_kubectl_delete_allnamespaces_tests
... skipping 103 lines ...
has:Timeout
Successful
(Bmessage:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
(Bmessage:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 149 lines ...
(BFlag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:296: Successful get foos/test {{.patched}}: value2
(BFlag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:298: Successful get foos/test {{.patched}}: <no value>
(B+++ [0708 07:16:13] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 325 lines ...
(Bcrd.sh:519: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace/non-native-resources created
bar.company.com/test created
crd.sh:524: Successful get bars {{len .items}}: 1
(Bnamespace "non-native-resources" deleted
crd.sh:527: Successful get bars {{len .items}}: 0
(BError from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
Recording: run_recursive_resources_tests
... skipping 4 lines ...
+++ command: run_recursive_resources_tests
+++ [0708 07:16:33] Testing recursive resources
+++ [0708 07:16:33] Creating namespace namespace-1657264593-12935
namespace/namespace-1657264593-12935 created
Context "test" modified.
W0708 07:16:33.854612   53360 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0708 07:16:33.856245   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BW0708 07:16:34.061314   53360 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0708 07:16:34.062990   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0708 07:16:34.280278   53360 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0708 07:16:34.281883   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0708 07:16:34.499940   53360 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0708 07:16:34.502078   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(BSuccessful
(Bmessage:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0708 07:16:34.857401   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:34.857458   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0708 07:16:35.265295   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:35.265339   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0708 07:16:35.570443   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:35.570475   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0708 07:16:35.785733   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:35.785769   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1657264593-12935
Priority:     0
Node:         <none>
... skipping 159 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: resource pods/busybox0 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox0 configured
Warning: resource pods/busybox1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:273: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:278: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
I0708 07:16:37.498570   56991 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0708 07:16:37.788878   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:37.788916   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:288: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:293: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0708 07:16:37.956982   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:37.957027   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:297: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:302: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
W0708 07:16:38.371918   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:38.371969   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0708 07:16:38.453279   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-6fkgn"
I0708 07:16:38.463860   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-klw46"
generic-resources.sh:306: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0708 07:16:38.581632   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:38.581676   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:312: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:313: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:318: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
generic-resources.sh:319: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:328: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:329: Successful get rc busybox1 {{.spec.replicas}}: 1
I0708 07:16:39.844254   53360 alloc.go:327] "allocated clusterIPs" service="namespace-1657264593-12935/busybox0" clusterIPs=map[IPv4:10.0.0.233]
I0708 07:16:39.890533   53360 alloc.go:327] "allocated clusterIPs" service="namespace-1657264593-12935/busybox1" clusterIPs=map[IPv4:10.0.0.58]
generic-resources.sh:333: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:334: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:340: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:341: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:342: Successful get rc busybox1 {{.spec.replicas}}: 1
I0708 07:16:40.560689   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-smx28"
I0708 07:16:40.583405   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-vs2nw"
generic-resources.sh:346: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:347: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:356: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:361: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0708 07:16:41.482030   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/nginx1-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-57ccd55748 to 2"
I0708 07:16:41.492498   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/nginx1-deployment-57ccd55748" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-57ccd55748-5xx9k"
I0708 07:16:41.492636   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/nginx0-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-865c5f9cf5 to 2"
I0708 07:16:41.502904   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/nginx1-deployment-57ccd55748" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-57ccd55748-z2ktn"
I0708 07:16:41.502941   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/nginx0-deployment-865c5f9cf5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-865c5f9cf5-7gqrf"
I0708 07:16:41.514013   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/nginx0-deployment-865c5f9cf5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-865c5f9cf5-bn9xk"
generic-resources.sh:365: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:registry.k8s.io/nginx:1.7.9:
generic-resources.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:registry.k8s.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
W0708 07:16:42.126299   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:42.126348   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:378: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment resumed
deployment.apps/nginx0-deployment resumed
W0708 07:16:42.397928   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:42.397964   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:384: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
W0708 07:16:43.439053   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:43.439088   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Waiting for deployment "nginx1-deployment" rollout to finish
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
W0708 07:16:43.766569   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:43.766613   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
Waiting for deployment "nginx0-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 18 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
generic-resources.sh:411: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0708 07:16:47.187371   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-5sf45"
I0708 07:16:47.199477   56991 event.go:294] "Event occurred" object="namespace-1657264593-12935/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-gjs5x"
generic-resources.sh:415: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
... skipping 3 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
+++ exit code: 0
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0708 07:16:48] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
I0708 07:16:49.175623   56991 shared_informer.go:255] Waiting for caches to sync for resource quota
I0708 07:16:49.175686   56991 shared_informer.go:262] Caches are synced for resource quota
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1471: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0708 07:16:49.586804   56991 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0708 07:16:49.586877   56991 shared_informer.go:262] Caches are synced for garbage collector
query for namespaces had limit param
query for resourcequotas had limit param
query for limitranges had limit param
W0708 07:16:49.921400   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:49.921440   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
query for namespaces had user-specified limit param
Successful describe namespaces verbose logs:
I0708 07:16:49.469068   74673 loader.go:372] Config loaded from file:  /tmp/tmp.Yn7mJhriOZ/.kube/config
I0708 07:16:49.474784   74673 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I0708 07:16:49.509671   74673 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces?limit=500 200 OK in 3 milliseconds
I0708 07:16:49.520704   74673 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default 200 OK in 1 milliseconds
... skipping 126 lines ...
I0708 07:16:49.709049   74673 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1657264593-12935/resourcequotas?limit=500 200 OK in 1 milliseconds
I0708 07:16:49.710440   74673 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1657264593-12935/limitranges?limit=500 200 OK in 1 milliseconds
I0708 07:16:49.712138   74673 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb 200 OK in 1 milliseconds
I0708 07:16:49.713600   74673 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb/resourcequotas?limit=500 200 OK in 1 milliseconds
I0708 07:16:49.714863   74673 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb/limitranges?limit=500 200 OK in 1 milliseconds
(Bnamespace "my-namespace" deleted
W0708 07:16:50.870952   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:50.870998   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0708 07:16:52.870137   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:52.870178   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0708 07:16:53.961052   56991 horizontal.go:360] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1657264593-12935
I0708 07:16:53.970212   56991 horizontal.go:360] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1657264593-12935
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1482: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
W0708 07:16:55.714992   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:16:55.715034   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1657264372-27266" deleted
namespace "namespace-1657264372-27980" deleted
... skipping 31 lines ...
namespace "namespace-1657264561-6837" deleted
namespace "namespace-1657264562-19662" deleted
namespace "namespace-1657264564-6277" deleted
namespace "namespace-1657264566-26050" deleted
namespace "namespace-1657264593-12935" deleted
namespace "nsb" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:Warning: deleting cluster-scoped resources
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1657264372-27266" deleted
... skipping 32 lines ...
namespace "namespace-1657264561-6837" deleted
namespace "namespace-1657264562-19662" deleted
namespace "namespace-1657264564-6277" deleted
namespace "namespace-1657264566-26050" deleted
namespace "namespace-1657264593-12935" deleted
namespace "nsb" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1489: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1490: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
... skipping 15 lines ...
core.sh:1515: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1519: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1523: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1525: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
core.sh:1532: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1536: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
I0708 07:17:05.397124   56991 namespace_controller.go:185] Namespace has been deleted my-namespace
I0708 07:17:06.140560   56991 namespace_controller.go:185] Namespace has been deleted namespace-1657264372-27266
W0708 07:17:06.151217   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:17:06.151270   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0708 07:17:06.176639   56991 namespace_controller.go:185] Namespace has been deleted namespace-1657264373-17705
I0708 07:17:06.223380   56991 namespace_controller.go:185] Namespace has been deleted namespace-1657264372-27980
I0708 07:17:06.272822   56991 namespace_controller.go:185] Namespace has been deleted namespace-1657264376-4639
I0708 07:17:06.320081   56991 namespace_controller.go:185] Namespace has been deleted namespace-1657264384-30371
I0708 07:17:06.355562   56991 namespace_controller.go:185] Namespace has been deleted namespace-1657264400-21757
I0708 07:17:06.355585   56991 namespace_controller.go:185] Namespace has been deleted namespace-1657264405-15212
... skipping 94 lines ...
I0708 07:17:11.496179   75301 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-secrets/secrets/test-secret 200 OK in 1 milliseconds
(Bsecret "test-secret" deleted
core.sh:856: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:860: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:861: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
W0708 07:17:12.167823   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:17:12.167865   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
core.sh:871: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:875: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:876: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
secret "test-secret" deleted
... skipping 12 lines ...
core.sh:921: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:930: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
(Bsecret "test-secret" deleted
namespace "test-secrets" deleted
I0708 07:17:15.329092   56991 namespace_controller.go:185] Namespace has been deleted other
W0708 07:17:16.188426   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:17:16.188463   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0708 07:17:17.031594   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:17:17.031644   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 43 lines ...
+++ command: run_client_config_tests
+++ [0708 07:17:27] Creating namespace namespace-1657264647-21620
namespace/namespace-1657264647-21620 created
Context "test" modified.
+++ [0708 07:17:27] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "vendor/k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 26 lines ...
I0708 07:17:29.071069   76492 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/serviceaccounts/test-service-account 200 OK in 1 milliseconds
I0708 07:17:29.072622   76492 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/secrets?limit=500 200 OK in 1 milliseconds
I0708 07:17:29.074157   76492 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/events?fieldSelector=involvedObject.name%3Dtest-service-account%2CinvolvedObject.namespace%3Dtest-service-accounts%2CinvolvedObject.kind%3DServiceAccount%2CinvolvedObject.uid%3D41df9db0-e35b-4803-a94a-0f0c526007d0&limit=500 200 OK in 1 milliseconds
serviceaccount "test-service-account" deleted
namespace "test-service-accounts" deleted
I0708 07:17:32.155779   56991 namespace_controller.go:185] Namespace has been deleted test-configmaps
W0708 07:17:32.991501   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:17:32.991540   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_job_tests
Running command: run_job_tests

+++ Running case: test-cmd.run_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 19 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 55 lines ...
Annotations:      batch.kubernetes.io/job-tracking: 
                  cronjob.kubernetes.io/instantiate: manual
Parallelism:      1
Completions:      1
Completion Mode:  NonIndexed
Start Time:       Fri, 08 Jul 2022 07:17:36 +0000
Pods Statuses:    1 Active (0 Ready) / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=b1384d45-ae17-4780-ad98-d23ee865b5e3
           job-name=test-job
  Containers:
   pi:
    Image:      registry.k8s.io/perl
... skipping 43 lines ...
Context "test" modified.
I0708 07:17:42.711852   56991 job_controller.go:504] enqueueing job namespace-1657264662-20073/test-job
job.batch/test-job created
I0708 07:17:42.724474   56991 event.go:294] "Event occurred" object="namespace-1657264662-20073/test-job" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-job-d7wvm"
I0708 07:17:42.724498   56991 job_controller.go:504] enqueueing job namespace-1657264662-20073/test-job
I0708 07:17:42.735331   56991 job_controller.go:504] enqueueing job namespace-1657264662-20073/test-job
W0708 07:17:42.765820   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:17:42.765856   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
create.sh:94: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: registry.k8s.io/nginx:test-cmd
I0708 07:17:42.930273   56991 job_controller.go:504] enqueueing job namespace-1657264662-20073/test-job
job.batch "test-job" deleted
E0708 07:17:42.930454   56991 tracking_utils.go:109] "deleting tracking annotation UID expectations" err="couldn't create key for object namespace-1657264662-20073/test-job: could not find key for obj \"namespace-1657264662-20073/test-job\"" job="namespace-1657264662-20073/test-job"
I0708 07:17:43.007957   56991 job_controller.go:504] enqueueing job namespace-1657264662-20073/test-job-pi
job.batch/test-job-pi created
... skipping 418 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1034: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:1047: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(Bservice "redis-master" deleted
core.sh:1054: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1058: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0708 07:17:48.836923   53360 alloc.go:327] "allocated clusterIPs" service="default/redis-master" clusterIPs=map[IPv4:10.0.0.83]
... skipping 141 lines ...
apps.sh:90: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
apps.sh:91: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:2.0:
apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:99: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:2.0:
apps.sh:100: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
W0708 07:17:59.003989   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:17:59.004061   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:103: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:latest:
apps.sh:104: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
apps.sh:105: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_rc_tests
... skipping 32 lines ...
Namespace:    namespace-1657264679-9757
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1657264679-9757
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1657264679-9757
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1657264679-9757
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1657264679-9757
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1657264679-9757
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1657264679-9757
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1657264679-9757
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 25 lines ...
(Bcore.sh:1240: Successful get rc frontend {{.spec.replicas}}: 3
(Breplicationcontroller/frontend scaled
E0708 07:18:01.797463   56991 replica_set.go:224] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1657264679-9757  f10ab83f-e74f-496a-bf65-126572ad5ef7 2186 2 2022-07-08 07:18:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update v1 2022-07-08 07:18:00 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update v1 2022-07-08 07:18:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002ecb458 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0708 07:18:01.844924   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-n5ljx"
core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 2
(Bcore.sh:1248: Successful get rc frontend {{.spec.replicas}}: 2
(Berror: Expected replicas to be 3, was 2
core.sh:1252: Successful get rc frontend {{.spec.replicas}}: 2
(Bcore.sh:1256: Successful get rc frontend {{.spec.replicas}}: 2
(Breplicationcontroller/frontend scaled
I0708 07:18:02.414677   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-9grgx"
core.sh:1260: Successful get rc frontend {{.spec.replicas}}: 3
(Bcore.sh:1264: Successful get rc frontend {{.spec.replicas}}: 3
... skipping 30 lines ...
core.sh:1288: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
(Bdeployment.apps "nginx-deployment" deleted
I0708 07:18:04.591885   53360 alloc.go:327] "allocated clusterIPs" service="namespace-1657264679-9757/expose-test-deployment" clusterIPs=map[IPv4:10.0.0.97]
Successful
(Bmessage:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
W0708 07:18:04.684788   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:18:04.684848   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "expose-test-deployment" deleted
Successful
(Bmessage:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0708 07:18:05.075326   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-58f46b58b6 to 3"
I0708 07:18:05.085549   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-8wc4z"
I0708 07:18:05.096158   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-9zm6g"
I0708 07:18:05.096512   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-sk4vg"
... skipping 24 lines ...
(Bpod "valid-pod" deleted
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
Successful
(Bmessage:error: cannot expose a Node
has:cannot expose
Successful
(Bmessage:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
I0708 07:18:07.761513   53360 alloc.go:327] "allocated clusterIPs" service="namespace-1657264679-9757/kubernetes-serve-hostname-testing-sixty-three-characters-in-len" clusterIPs=map[IPv4:10.0.0.106]
Successful
... skipping 32 lines ...
(Bhorizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1403: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 70
(Bhorizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1407: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
(Bhorizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
replicationcontroller "frontend" deleted
core.sh:1416: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
(BapiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0708 07:18:11.534841   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-58c5566cd5 to 3"
I0708 07:18:11.545798   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-resources-58c5566cd5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-58c5566cd5-2v2jk"
I0708 07:18:11.555686   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-resources-58c5566cd5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-58c5566cd5-5rxd2"
I0708 07:18:11.555722   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-resources-58c5566cd5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-58c5566cd5-v4mbw"
core.sh:1422: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
(Bcore.sh:1423: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
(Bcore.sh:1424: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
(Bdeployment.apps/nginx-deployment-resources resource requirements updated
I0708 07:18:11.981549   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-8699d87649 to 1"
I0708 07:18:11.994396   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-resources-8699d87649" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-8699d87649-7d9gh"
core.sh:1427: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
(Bcore.sh:1428: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
(Berror: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I0708 07:18:12.450273   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-58c5566cd5 to 2 from 3"
I0708 07:18:12.502040   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-fdc5df4b7 to 1 from 0"
I0708 07:18:12.511205   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-resources-58c5566cd5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-58c5566cd5-2v2jk"
I0708 07:18:12.511249   56991 event.go:294] "Event occurred" object="namespace-1657264679-9757/nginx-deployment-resources-fdc5df4b7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-fdc5df4b7-fscx2"
core.sh:1433: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 155 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1444: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
(Bcore.sh:1445: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
(Bcore.sh:1446: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 46 lines ...
                pod-template-hash=54b46c846
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=54b46c846
  Containers:
   nginx:
    Image:        registry.k8s.io/nginx:test-cmd
... skipping 60 lines ...
apps.sh:236: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
(Bdeployment.apps/deployment-with-unixuserid created
I0708 07:18:16.529890   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/deployment-with-unixuserid" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set deployment-with-unixuserid-95945856c to 1"
I0708 07:18:16.542657   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/deployment-with-unixuserid-95945856c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-with-unixuserid-95945856c-wll69"
apps.sh:240: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
(Bdeployment.apps "deployment-with-unixuserid" deleted
W0708 07:18:16.774018   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:18:16.774060   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:247: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
(Bdeployment.apps/nginx-deployment created
I0708 07:18:17.105621   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-58f46b58b6 to 3"
I0708 07:18:17.116805   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-cx47g"
I0708 07:18:17.127897   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-z526x"
I0708 07:18:17.128265   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-7lmzv"
... skipping 51 lines ...
apps.sh:311: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
(B    Image:	registry.k8s.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:315: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
(Bdeployment.apps/nginx rolled back
apps.sh:319: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
(Berror: unable to find specified revision 1000000 in history
apps.sh:322: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
(Bdeployment.apps/nginx rolled back
apps.sh:326: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
(Bdeployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
W0708 07:18:24.786304   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:18:24.786355   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0708 07:18:25.140792   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-7d59d7599d to 2 from 3"
I0708 07:18:25.200205   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-7d59d7599d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-7d59d7599d-sk4c8"
I0708 07:18:25.200314   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-57c4cb5b4f to 1 from 0"
I0708 07:18:25.211002   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-57c4cb5b4f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-57c4cb5b4f-6s45r"
I0708 07:18:25.259138   56991 horizontal.go:360] Horizontal Pod Autoscaler frontend has been deleted in namespace-1657264679-9757
... skipping 81 lines ...
(Bapps.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
(Bdeployment.apps/nginx-deployment image updated
I0708 07:18:28.314627   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-d77bf5fc6 to 1"
I0708 07:18:28.327775   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-deployment-d77bf5fc6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-d77bf5fc6-zthhh"
apps.sh:373: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
(Bapps.sh:374: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
(Berror: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:379: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
(Bapps.sh:380: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
(Bdeployment.apps/nginx-deployment image updated
apps.sh:383: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
(BW0708 07:18:29.301103   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:18:29.301140   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:384: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
(Bapps.sh:387: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
(Bapps.sh:388: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
(Bdeployment.apps/nginx-deployment image updated
I0708 07:18:29.797486   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-695b6596bd to 2 from 3"
I0708 07:18:29.871371   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-deployment-695b6596bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-695b6596bd-wsvl5"
... skipping 53 lines ...
Warning: key password transferred to PASSWORD
Warning: key username transferred to USERNAME
deployment.apps/nginx-deployment env updated
I0708 07:18:33.367063   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-deployment-858c8bf66d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-858c8bf66d-8k4mb"
deployment.apps/nginx-deployment env updated
Successful
(Bmessage:error: standard input cannot be used for multiple arguments
has:standard input cannot be used for multiple arguments
I0708 07:18:33.618815   56991 event.go:294] "Event occurred" object="namespace-1657264693-25479/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-5d46db6669 to 0 from 1"
deployment.apps "nginx-deployment" deleted
E0708 07:18:33.759134   56991 replica_set.go:550] sync "namespace-1657264693-25479/nginx-deployment-5f6bd49dcf" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5f6bd49dcf": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1657264693-25479/nginx-deployment-5f6bd49dcf, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a0a7468c-f54f-4708-a5fd-47ee1375b2d1, UID in object meta: 
configmap "test-set-env-config" deleted
E0708 07:18:33.803874   56991 replica_set.go:550] sync "namespace-1657264693-25479/nginx-deployment-695b6596bd" failed with replicasets.apps "nginx-deployment-695b6596bd" not found
secret "test-set-env-secret" deleted
E0708 07:18:33.904518   56991 replica_set.go:550] sync "namespace-1657264693-25479/nginx-deployment-587bbfd4b9" failed with replicasets.apps "nginx-deployment-587bbfd4b9" not found
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests
E0708 07:18:33.954222   56991 replica_set.go:550] sync "namespace-1657264693-25479/nginx-deployment-5d46db6669" failed with replicasets.apps "nginx-deployment-5d46db6669" not found

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0708 07:18:33] Creating namespace namespace-1657264713-18890
E0708 07:18:34.004248   56991 replica_set.go:550] sync "namespace-1657264693-25479/nginx-deployment-bfbd4c9bf" failed with replicasets.apps "nginx-deployment-bfbd4c9bf" not found
namespace/namespace-1657264713-18890 created
Context "test" modified.
+++ [0708 07:18:34] Testing kubectl(v1:replicasets)
apps.sh:553: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
(Breplicaset.apps/frontend created
+++ [0708 07:18:34] Deleting rs
I0708 07:18:34.436248   56991 event.go:294] "Event occurred" object="namespace-1657264713-18890/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-wf6wl"
I0708 07:18:34.475891   56991 event.go:294] "Event occurred" object="namespace-1657264713-18890/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-k97gx"
I0708 07:18:34.475929   56991 event.go:294] "Event occurred" object="namespace-1657264713-18890/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-6rw24"
I0708 07:18:34.502974   56991 horizontal.go:360] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1657264693-25479
replicaset.apps "frontend" deleted
E0708 07:18:34.615312   56991 replica_set.go:550] sync "namespace-1657264713-18890/frontend" failed with replicasets.apps "frontend" not found
apps.sh:559: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
(BW0708 07:18:34.697290   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:18:34.697334   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:563: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
(Breplicaset.apps/frontend created
I0708 07:18:34.999964   56991 event.go:294] "Event occurred" object="namespace-1657264713-18890/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-8hgn5"
I0708 07:18:35.010433   56991 event.go:294] "Event occurred" object="namespace-1657264713-18890/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-f7fhh"
I0708 07:18:35.010475   56991 event.go:294] "Event occurred" object="namespace-1657264713-18890/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-xzvgs"
apps.sh:567: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
(B+++ [0708 07:18:35] Deleting rs
replicaset.apps "frontend" deleted
E0708 07:18:35.253815   56991 replica_set.go:550] sync "namespace-1657264713-18890/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1657264713-18890/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3b89580f-ae80-4ac4-a8e8-350894fc2666, UID in object meta: 
apps.sh:571: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
(Bapps.sh:573: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
(Bpod "frontend-8hgn5" deleted
pod "frontend-f7fhh" deleted
pod "frontend-xzvgs" deleted
apps.sh:576: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1657264713-18890
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1657264713-18890
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1657264713-18890
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1657264713-18890
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1657264713-18890
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1657264713-18890
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1657264713-18890
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1657264713-18890
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 225 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:716: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
(BSuccessful
(Bmessage:kubectl-autoscale
has:kubectl-autoscale
horizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 74 lines ...
(Bapps.sh:475: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/pause:2.0:
(Bapps.sh:476: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
(Bstatefulset.apps/nginx rolled back
apps.sh:479: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.7:
(Bapps.sh:480: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
(BSuccessful
(Bmessage:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:484: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.7:
(Bapps.sh:485: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
(Bstatefulset.apps/nginx rolled back
apps.sh:488: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.8:
(Bapps.sh:489: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/pause:2.0:
... skipping 63 lines ...
Name:         mock
Namespace:    namespace-1657264732-31438
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.7
    Port:         9949/TCP
... skipping 61 lines ...
Name:         mock
Namespace:    namespace-1657264732-31438
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.7
    Port:         9949/TCP
... skipping 61 lines ...
Name:         mock
Namespace:    namespace-1657264732-31438
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.7
    Port:         9949/TCP
... skipping 43 lines ...
Namespace:    namespace-1657264732-31438
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.7
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1657264732-31438
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.7
    Port:         9949/TCP
... skipping 120 lines ...
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
(Bpersistentvolume "pv0001" deleted
persistentvolume/pv0002 created
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
(Bpersistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0708 07:19:09.316769   56991 pv_protection_controller.go:114] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
E0708 07:19:09.338079   56991 pv_protection_controller.go:114] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
(Bquery for persistentvolumes had limit param
query for events had limit param
query for persistentvolumes had user-specified limit param
Successful describe persistentvolumes verbose logs:
I0708 07:19:09.490270   86827 loader.go:372] Config loaded from file:  /tmp/tmp.Yn7mJhriOZ/.kube/config
I0708 07:19:09.496660   86827 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I0708 07:19:09.526945   86827 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes?limit=500 200 OK in 2 milliseconds
I0708 07:19:09.529741   86827 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes/pv0003 200 OK in 1 milliseconds
I0708 07:19:09.540632   86827 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dpv0003%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPersistentVolume%2CinvolvedObject.uid%3Df0f1ff39-6550-4178-a9b9-e1e7df405fa9&limit=500 200 OK in 9 milliseconds
(Bpersistentvolume "pv0003" deleted
storage.sh:44: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpersistentvolume/pv0001 created
E0708 07:19:10.178072   56991 pv_protection_controller.go:114] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:47: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
(BSuccessful
(Bmessage:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:Warning: deleting cluster-scoped resources
Successful
... skipping 6 lines ...
Running command: run_persistent_volume_claims_tests

+++ Running case: test-cmd.run_persistent_volume_claims_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_persistent_volume_claims_tests
+++ [0708 07:19:10] Creating namespace namespace-1657264750-26576
W0708 07:19:10.717764   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:19:10.717807   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1657264750-26576 created
Context "test" modified.
+++ [0708 07:19:10] Testing persistent volumes claims
storage.sh:66: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpersistentvolumeclaim/myclaim-1 created
I0708 07:19:11.156597   56991 event.go:294] "Event occurred" object="namespace-1657264750-26576/myclaim-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
... skipping 13 lines ...
(Bpersistentvolumeclaim "myclaim-1" deleted
I0708 07:19:11.585628   56991 event.go:294] "Event occurred" object="namespace-1657264750-26576/myclaim-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
persistentvolumeclaim/myclaim-2 created
I0708 07:19:11.908623   56991 event.go:294] "Event occurred" object="namespace-1657264750-26576/myclaim-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
I0708 07:19:11.918012   56991 event.go:294] "Event occurred" object="namespace-1657264750-26576/myclaim-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
storage.sh:75: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-2:
(BW0708 07:19:12.093190   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:19:12.093232   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0708 07:19:12.122681   56991 event.go:294] "Event occurred" object="namespace-1657264750-26576/myclaim-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
persistentvolumeclaim "myclaim-2" deleted
W0708 07:19:12.291271   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:19:12.291315   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0708 07:19:12.445440   56991 event.go:294] "Event occurred" object="namespace-1657264750-26576/myclaim-3" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
persistentvolumeclaim/myclaim-3 created
I0708 07:19:12.455875   56991 event.go:294] "Event occurred" object="namespace-1657264750-26576/myclaim-3" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
storage.sh:79: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-3:
(Bpersistentvolumeclaim "myclaim-3" deleted
I0708 07:19:12.686446   56991 event.go:294] "Event occurred" object="namespace-1657264750-26576/myclaim-3" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
... skipping 43 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 08 Jul 2022 07:12:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 34 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 08 Jul 2022 07:12:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 35 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 08 Jul 2022 07:12:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 08 Jul 2022 07:12:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 42 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 08 Jul 2022 07:12:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 34 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 08 Jul 2022 07:12:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 34 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 08 Jul 2022 07:12:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 30 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 08 Jul 2022 07:12:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 08 Jul 2022 07:12:49 +0000   Fri, 08 Jul 2022 07:13:53 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 61 lines ...
+++ [0708 07:19:16] exec credential plugin not triggered since kubectl was called with provided --username/--password
certificatesigningrequest.certificates.k8s.io/testuser created
authentication.sh:152: Successful get csr/testuser {{range.status.conditions}}{{.type}}{{end}}: 
(Bcertificatesigningrequest.certificates.k8s.io/testuser approved
authentication.sh:154: Successful get csr/testuser {{range.status.conditions}}{{.type}}{{end}}: Approved
(Bauthentication.sh:156: Successful get csr/testuser {{.status.certificate}}: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMxakNDQWI2Z0F3SUJBZ0lRUWxhcm8vb1VtRCtHcEtNNC9RNGFwVEFOQmdrcWhraUc5dzBCQVFzRkFEQVUKTVJJd0VBWURWUVFEREFreE1qY3VNQzR3TGpFd0hoY05Nakl3TnpBNE1EY3hOREUyV2hjTk1qTXdOekE0TURjeApOREUyV2pBVE1SRXdEd1lEVlFRREV3aDBaWE4wZFhObGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQCkFEQ0NBUW9DZ2dFQkFMUXhaWE1YZEFMM1oxQkxNRCswR1ZyZWlSMDJSSTNKMzVKdWZNVTNFZ2FhZWIvbTlLRFYKUFp2ekxIcXpRaGN2TlFyYVZwajF3SFhhZEwzcWJkaFg3dlZnVldkZUVhbUJ6RlNWQ0J5di9LU1p3eVhuamtkZwpGR3psM2FiKy90eDlpT3EyYTY4WEFSNUI1WElqY09sWXNTN2NNVUpxelZPa0FHY09DQTV3c043VmdTTUdDam1FCjJXaU9jcTQ2b3JNbDhoYU9OdUgvQjYrL0ZFeG43bDkyTGlLVnBLSGNuWGVTYSthK3VFUGFlZnBhcFIyT0ZLd0wKeWc1T2FmTnViVkUxdW1BN3JEdElKdE9uOFRGdmRnaStyVlBxYk9LajgzUTNoVkNCTGE1RDdyb1JHSlVIV0lZZgo0bm1OS21GeWJ5eGdDM0JkREozNERkM3JYQ2pUd0ZwN01GVUNBd0VBQWFNbE1DTXdFd1lEVlIwbEJBd3dDZ1lJCkt3WUJCUVVIQXdJd0RBWURWUjBUQVFIL0JBSXdBREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBVmIySzVIZngKdU5raTBQeDVhZVoweHNaYkQ4U3VweEJXS09YVGF4ZlBPRTlVUWlGQWRZZFRWNHprZHhXd002TndHS09mYzZOWQpTK3lYWlpsblVTc21tV0RwRFNFaWt6QkR2dUFTYnlWYlBZLy9ONnhWODdrZGJ5amZLdEo5UjNFblk0bmtaNS9lCjYyT3N5UjZSMFA5NXpuOFBhM3RJdTNrUmFSNTNwMzZZbmZwMDVvdk1sZHpiSlZsdWlrMWJtdkIyQXBQZ3B5UmEKOWpEdXZ1SS84K1NMbHpiZVFOczEvcE9GV2NSNGl5ZGdqblZCZ1NyQ2tUZjZjMTRYVWdLd3U5RUlHb3RCVytkegoyaU9aWVRPL3phMm5OdnlKMitYams4MnBRb2E0NStxRmFST2RQM0lhMFRaNlVDYm9Cc1dyQUFUbGZyU3NjVWVCClpSRW4rSFFZZkpGQ3ZRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
(BW0708 07:19:17.135842   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:19:17.135881   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [0708 07:19:17] exec credential plugin not triggered since kubectl was called with provided --client-certificate/--client-key
User "testuser" set.
+++ [0708 07:19:17] exec credential plugin not triggered since kubeconfig was configured with --client-certificate/--client-key for authentication
certificatesigningrequest.certificates.k8s.io "testuser" deleted
+++ [0708 07:19:17] Testing kubectl with configured client.authentication.k8s.io/v1 exec credentials plugin
+++ [0708 07:19:17] exec credential plugin not triggered since kubectl was called with provided --token
... skipping 99 lines ...
yes
has:the server doesn't have a resource type
Successful
(Bmessage:yes
has:yes
Successful
(Bmessage:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
Successful
Successful
message:yes
0
has:0
... skipping 62 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:870: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:871: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:872: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:873: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 500 lines ...
namespace-1657264750-26576   default   0         29s
namespace-1657264764-31187   default   0         15s
namespace-1657264766-16845   default   0         13s
some-other-random            default   0         16s
has:all-ns-test-2
namespace "all-ns-test-1" deleted
W0708 07:19:42.883924   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:19:42.883967   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "all-ns-test-2" deleted
W0708 07:19:49.643031   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:19:49.643073   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0708 07:19:50.091435   56991 namespace_controller.go:185] Namespace has been deleted all-ns-test-1
get.sh:400: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:404: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:408: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
... skipping 17 lines ...
message:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind
No resources found in namespace-1657264766-16845 namespace.
has:example.com/v1beta1 DeprecatedKind is deprecated
Successful
message:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind
No resources found in namespace-1657264766-16845 namespace.
error: 1 warning received
has:example.com/v1beta1 DeprecatedKind is deprecated
Successful
message:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind
No resources found in namespace-1657264766-16845 namespace.
error: 1 warning received
has:error: 1 warning received
customresourcedefinition.apiextensions.k8s.io "deprecated.example.com" deleted
+++ exit code: 0
Recording: run_template_output_tests
Running command: run_template_output_tests

+++ Running case: test-cmd.run_template_output_tests 
... skipping 279 lines ...
pod "cassandra-qmrks" deleted
pod "deploy-748954c8bb-l97cp" deleted
I0708 07:19:59.038174   56991 event.go:294] "Event occurred" object="namespace-1657264792-27961/deploy-748954c8bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deploy-748954c8bb-w6d8k"
pod "valid-pod" deleted
replicationcontroller "cassandra" deleted
clusterrole.rbac.authorization.k8s.io "myclusterrole" deleted
W0708 07:19:59.322967   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:19:59.323012   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
clusterrolebinding.rbac.authorization.k8s.io "foo" deleted
deployment.apps "deploy" deleted
+++ exit code: 0
Recording: run_certificates_tests
Running command: run_certificates_tests

... skipping 164 lines ...
}
certificate.sh:51: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: Denied
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificate.sh:53: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: 
certificatesigningrequest.certificates.k8s.io/foo created
certificate.sh:56: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: 
W0708 07:20:02.643737   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:20:02.643773   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
certificatesigningrequest.certificates.k8s.io/foo denied
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "certificates.k8s.io/v1",
... skipping 97 lines ...
node/127.0.0.1 cordoned (server dry run)
Warning: deleting Pods that declare no controller: namespace-1657264803-4451/test-pod-1
evicting pod namespace-1657264803-4451/test-pod-1 (server dry run)
node/127.0.0.1 drained (server dry run)
node-management.sh:140: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
Warning: deleting Pods that declare no controller: namespace-1657264803-4451/test-pod-1
W0708 07:20:25.729460   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:20:25.729496   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0708 07:20:38.682487   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:20:38.682527   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:node/127.0.0.1 cordoned
evicting pod namespace-1657264803-4451/test-pod-1
pod "test-pod-1" has DeletionTimestamp older than 1 seconds, skipping
node/127.0.0.1 drained
has:evicting pod .*/test-pod-1
... skipping 14 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:161: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:166: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
node-management.sh:172: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
node-management.sh:174: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node-management.sh:176: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
Successful
... skipping 78 lines ...
Warning: deleting Pods that declare no controller: namespace-1657264803-4451/test-pod-1, namespace-1657264803-4451/test-pod-2
evicting pod namespace-1657264803-4451/test-pod-1 (dry run)
evicting pod namespace-1657264803-4451/test-pod-2 (dry run)
node/127.0.0.1 drained (dry run)
has:/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
... skipping 18 lines ...
+++ [0708 07:20:41] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
Successful
message:I am plugin foo
has:plugin foo
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 13 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0708 07:20:42] Testing impersonation
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:60: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:61: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
... skipping 19 lines ...
I0708 07:20:44.197893   56991 event.go:294] "Event occurred" object="namespace-1657264843-16416/test-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-1-858d6766c9 to 1"
I0708 07:20:44.216877   56991 event.go:294] "Event occurred" object="namespace-1657264843-16416/test-1-858d6766c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-1-858d6766c9-mtm4l"
deployment.apps/test-2 created
I0708 07:20:44.291833   56991 event.go:294] "Event occurred" object="namespace-1657264843-16416/test-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-2-8bd5b8858 to 1"
I0708 07:20:44.302700   56991 event.go:294] "Event occurred" object="namespace-1657264843-16416/test-2-8bd5b8858" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-2-8bd5b8858-szxcm"
wait.sh:36: Successful get deployments {{range .items}}{{.metadata.name}},{{end}}: test-1,test-2,
W0708 07:20:45.656583   56991 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0708 07:20:45.656623   56991 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "test-1" deleted
deployment.apps "test-2" deleted
Successful
message:deployment.apps/test-1 condition met
deployment.apps/test-2 condition met
has:test-1 condition met
... skipping 82 lines ...
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0708 07:21:04] Running tests without code coverage 
{"Time":"2022-07-08T07:24:45.586312214Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apimachinery","Output":"ok  \tk8s.io/kubernetes/test/integration/apimachinery\t69.593s\n"}
{"Time":"2022-07-08T07:26:09.756039401Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/certreload","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/certreload\t77.959s\n"}
{"Time":"2022-07-08T07:26:37.897736932Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/admissionwebhook\t167.609s\n"}
{"Time":"2022-07-08T07:27:53.703585201Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/openapi","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/openapi\t69.717s\n"}
{"Time":"2022-07-08T07:27:59.944180424Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Test":"TestWatchCacheUpdatedByEtcd","Output":"WARNING: 2022/07/08 07:27:59 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:36939 127.0.0.1:36939 \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:36939: connect: connection refused\". Reconnecting...\n"}
{"Time":"2022-07-08T07:28:01.012939356Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver\t246.463s\n"}
{"Time":"2022-07-08T07:28:06.508801692Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/podlogs","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/podlogs\t7.046s\n"}
{"Time":"2022-07-08T07:28:15.3370458Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/tracing","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/tracing\t8.574s\n"}
{"Time":"2022-07-08T07:28:26.404105029Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/apply\t275.417s\n"}
{"Time":"2022-07-08T07:28:34.329907381Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/flowcontrol","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/flowcontrol\t138.426s\n"}
{"Time":"2022-07-08T07:28:46.924106548Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/configmap","Output":"ok  \tk8s.io/kubernetes/test/integration/configmap\t6.225s\n"}
... skipping 28 lines ...