PR        wzshiming: Graceful Node Shutdown Based On Pod Priority
Result    ABORTED
Tests     0 failed / 62 succeeded
Started   2021-10-27 03:17
Elapsed   17m27s
Revision  351c5baf28b9f1302d709ff9fedbb01080d232bc
Refs      102915

No Test Failures!



Error lines from build-log.txt

... skipping 76 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 157: bogus-expected-to-fail: command not found
!!! [1027 03:22:05] Call tree:
!!! [1027 03:22:05]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [1027 03:22:05]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [1027 03:22:05]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:133 juLog(...)
!!! [1027 03:22:05]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:161 record_command(...)
!!! [1027 03:22:05]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [1027 03:22:05] Running kubeadm tests
+++ [1027 03:22:09] Building go targets for linux/amd64:
    cmd/kubeadm
> static build CGO_ENABLED=0: k8s.io/kubernetes/cmd/kubeadm
+++ [1027 03:23:05] Running tests without code coverage 
{"Time":"2021-10-27T03:23:54.056219642Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t46.183s\n"}
... skipping 202 lines ...
+++ [1027 03:26:22] Building go targets for linux/amd64:
    cmd/kube-controller-manager
> static build CGO_ENABLED=0: k8s.io/kubernetes/cmd/kube-controller-manager
+++ [1027 03:26:52] Generate kubeconfig for controller-manager
+++ [1027 03:26:52] Starting controller-manager
I1027 03:26:52.584308   56853 serving.go:348] Generated self-signed cert in-memory
W1027 03:26:53.398528   56853 authentication.go:419] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1027 03:26:53.398586   56853 authentication.go:316] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W1027 03:26:53.398594   56853 authentication.go:340] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W1027 03:26:53.398608   56853 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1027 03:26:53.398651   56853 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I1027 03:26:53.398731   56853 controllermanager.go:188] Version: v1.23.0-alpha.3.570+6aad46c6b2f3ad
I1027 03:26:53.400178   56853 secure_serving.go:200] Serving securely on [::]:10257
I1027 03:26:53.400310   56853 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1027 03:26:53.400491   56853 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
+++ [1027 03:26:53] On try 2, controller-manager: ok
... skipping 18 lines ...
I1027 03:26:53.501576   56853 controllermanager.go:597] Started "cronjob"
W1027 03:26:53.501588   56853 controllermanager.go:575] Skipping "csrsigning"
I1027 03:26:53.501687   56853 cronjob_controllerv2.go:125] "Starting cronjob controller v2"
I1027 03:26:53.501707   56853 shared_informer.go:240] Waiting for caches to sync for cronjob
W1027 03:26:53.501856   56853 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1027 03:26:53.501927   56853 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
E1027 03:26:53.501961   56853 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1027 03:26:53.501971   56853 controllermanager.go:575] Skipping "service"
W1027 03:26:53.502144   56853 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1027 03:26:53.502205   56853 controllermanager.go:597] Started "clusterrole-aggregation"
I1027 03:26:53.502372   56853 controllermanager.go:597] Started "podgc"
W1027 03:26:53.502842   56853 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1027 03:26:53.503299   56853 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
... skipping 115 lines ...
I1027 03:26:53.532072   56853 pv_controller_base.go:308] Starting persistent volume controller
I1027 03:26:53.532098   56853 shared_informer.go:240] Waiting for caches to sync for persistent volume
I1027 03:26:53.532439   56853 controllermanager.go:597] Started "endpointslice"
I1027 03:26:53.532478   56853 endpointslice_controller.go:257] Starting endpoint slice controller
I1027 03:26:53.532495   56853 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I1027 03:26:53.532686   56853 node_lifecycle_controller.go:76] Sending events to api server
E1027 03:26:53.532724   56853 core.go:211] failed to start cloud node lifecycle controller: no cloud provider provided
W1027 03:26:53.532736   56853 controllermanager.go:575] Skipping "cloud-node-lifecycle"
I1027 03:26:53.533008   56853 controllermanager.go:597] Started "endpoint"
I1027 03:26:53.533171   56853 endpoints_controller.go:193] Starting endpoint controller
I1027 03:26:53.533209   56853 shared_informer.go:240] Waiting for caches to sync for endpoint
I1027 03:26:53.533419   56853 controllermanager.go:597] Started "persistentvolume-expander"
W1027 03:26:53.533438   56853 controllermanager.go:562] "tokencleaner" is disabled
... skipping 50 lines ...
I1027 03:26:53.911226   56853 shared_informer.go:247] Caches are synced for PV protection 
I1027 03:26:53.932745   56853 shared_informer.go:247] Caches are synced for persistent volume 
I1027 03:26:53.933819   56853 shared_informer.go:247] Caches are synced for expand 
I1027 03:26:53.936022   56853 shared_informer.go:247] Caches are synced for attach detach 
I1027 03:26:54.358201   56853 shared_informer.go:247] Caches are synced for garbage collector 
node/127.0.0.1 created
W1027 03:26:54.429623   56853 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [1027 03:26:54] Checking kubectl version
I1027 03:26:54.438929   56853 shared_informer.go:247] Caches are synced for garbage collector 
I1027 03:26:54.438976   56853 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.0-alpha.3.570+6aad46c6b2f3ad", GitCommit:"6aad46c6b2f3ad08b75ccb5051f5c038019acbc7", GitTreeState:"clean", BuildDate:"2021-10-27T02:10:38Z", GoVersion:"go1.17.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.0-alpha.3.570+6aad46c6b2f3ad", GitCommit:"6aad46c6b2f3ad08b75ccb5051f5c038019acbc7", GitTreeState:"clean", BuildDate:"2021-10-27T02:10:38Z", GoVersion:"go1.17.2", Compiler:"gc", Platform:"linux/amd64"}
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   37s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 100 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [1027 03:26:59] Creating namespace namespace-1635305219-5552
namespace/namespace-1635305219-5552 created
Context "test" modified.
+++ [1027 03:26:59] Testing RESTMapper
+++ [1027 03:27:00] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 61 lines ...
namespace/namespace-1635305230-10306 created
Context "test" modified.
+++ [1027 03:27:10] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1635305240-10237 created
Context "test" modified.
+++ [1027 03:27:20] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 440 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector.
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I1027 03:27:35.820054   61570 round_trippers.go:541] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I1027 03:27:35.822121   61570 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3D95a3f4ef-bd83-48c5-b4ff-395d499564c4&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 240 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.6:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [1027 03:27:54] "kubectl patch with resourceVersion 610" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W1027 03:27:56.134916   56853 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
core.sh:655: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 29 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:3.6
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 83 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [1027 03:28:07] Creating namespace namespace-1635305287-20061
namespace/namespace-1635305287-20061 created
Context "test" modified.
+++ [1027 03:28:07] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 43 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [1027 03:28:07] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

+++ Running case: test-cmd.run_kubectl_apply_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 24 lines ...
I1027 03:28:10.587311   56853 event.go:294] "Event occurred" object="namespace-1635305288-32753/test-deployment-retainkeys-fdd7cb9fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-fdd7cb9fc-nd5zd"
apply.sh:69: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I1027 03:28:11.254820   56853 event.go:294] "Event occurred" object="namespace-1635305288-32753/test-deployment-retainkeys" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-deployment-retainkeys-fdd7cb9fc to 0"
I1027 03:28:11.288341   56853 event.go:294] "Event occurred" object="namespace-1635305288-32753/test-deployment-retainkeys-fdd7cb9fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-deployment-retainkeys-fdd7cb9fc-nd5zd"
I1027 03:28:11.362405   56853 event.go:294] "Event occurred" object="namespace-1635305288-32753/test-deployment-retainkeys" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-deployment-retainkeys-8695b756f8 to 1"
I1027 03:28:11.370704   56853 event.go:294] "Event occurred" object="namespace-1635305288-32753/test-deployment-retainkeys-8695b756f8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-8695b756f8-zdhnd"
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-10-27T03:28:11Z"}
++ early_exit_handler
++ '[' -n 172 ']'
++ kill -TERM 172