PR perithompson: Check Kubelet is running with correct Windows Permissions
Result FAILURE
Tests 0 failed / 75 succeeded
Started 2021-03-10 11:04
Elapsed 12m3s
Revision f1e2041b4e197aef1d21a4e499eb6392102d5f75
Refs 96616

No Test Failures!


Error lines from build-log.txt

... skipping 70 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 156: bogus-expected-to-fail: command not found
!!! [0310 11:10:21] Call tree:
!!! [0310 11:10:21]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0310 11:10:21]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0310 11:10:21]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:132 juLog(...)
!!! [0310 11:10:21]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:160 record_command(...)
!!! [0310 11:10:21]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0310 11:10:21] Running kubeadm tests
+++ [0310 11:10:26] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0310 11:11:20] Running tests without code coverage
{"Time":"2021-03-10T11:13:00.543558082Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t51.831s\n"}
✓  cmd/kubeadm/test/cmd (51.834s)
... skipping 380 lines ...
I0310 11:15:47.592639   59965 client.go:360] parsed scheme: "passthrough"
I0310 11:15:47.592707   59965 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0310 11:15:47.592720   59965 clientconn.go:948] ClientConn switching balancer to "pick_first"
+++ [0310 11:15:52] Generate kubeconfig for controller-manager
+++ [0310 11:15:52] Starting controller-manager
I0310 11:15:53.060569   63737 serving.go:347] Generated self-signed cert in-memory
W0310 11:15:53.746442   63737 authentication.go:410] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0310 11:15:53.746499   63737 authentication.go:307] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0310 11:15:53.746508   63737 authentication.go:331] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0310 11:15:53.746523   63737 authorization.go:216] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0310 11:15:53.746541   63737 authorization.go:184] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0310 11:15:53.746560   63737 controllermanager.go:175] Version: v1.21.0-beta.1.153+e3f0fb982a7c4d
I0310 11:15:53.748139   63737 secure_serving.go:197] Serving securely on [::]:10257
I0310 11:15:53.748209   63737 tlsconfig.go:240] Starting DynamicServingCertificateController
I0310 11:15:53.748828   63737 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0310 11:15:53.749423   63737 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
... skipping 19 lines ...
I0310 11:15:54.340518   63737 cronjob_controllerv2.go:125] Starting cronjob controller v2
I0310 11:15:54.340541   63737 controllermanager.go:574] Started "ttl"
I0310 11:15:54.340544   63737 shared_informer.go:240] Waiting for caches to sync for cronjob
I0310 11:15:54.340786   63737 ttl_controller.go:121] Starting TTL controller
I0310 11:15:54.340823   63737 shared_informer.go:240] Waiting for caches to sync for TTL
I0310 11:15:54.340871   63737 node_lifecycle_controller.go:76] Sending events to api server
E0310 11:15:54.340922   63737 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided
W0310 11:15:54.340936   63737 controllermanager.go:566] Skipping "cloud-node-lifecycle"
W0310 11:15:54.341295   63737 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0310 11:15:54.341344   63737 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0310 11:15:54.341780   63737 controllermanager.go:574] Started "persistentvolume-expander"
I0310 11:15:54.341824   63737 expand_controller.go:327] Starting expand controller
I0310 11:15:54.341842   63737 shared_informer.go:240] Waiting for caches to sync for expand
... skipping 55 lines ...
I0310 11:15:54.356929   63737 node_lifecycle_controller.go:377] Sending events to api server.
I0310 11:15:54.357181   63737 taint_manager.go:163] "Sending events to api server"
I0310 11:15:54.357301   63737 node_lifecycle_controller.go:505] Controller will reconcile labels.
I0310 11:15:54.357341   63737 controllermanager.go:574] Started "nodelifecycle"
I0310 11:15:54.357529   63737 node_lifecycle_controller.go:539] Starting node controller
I0310 11:15:54.357553   63737 shared_informer.go:240] Waiting for caches to sync for taint
E0310 11:15:54.357841   63737 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0310 11:15:54.357867   63737 controllermanager.go:566] Skipping "service"
I0310 11:15:54.358262   63737 controllermanager.go:574] Started "ttl-after-finished"
I0310 11:15:54.358300   63737 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0310 11:15:54.358599   63737 shared_informer.go:240] Waiting for caches to sync for TTL after finished
I0310 11:15:54.358919   63737 controllermanager.go:574] Started "endpoint"
I0310 11:15:54.359017   63737 endpoints_controller.go:189] Starting endpoint controller
... skipping 100 lines ...
I0310 11:15:54.467702   63737 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I0310 11:15:54.484325   63737 shared_informer.go:247] Caches are synced for namespace 
I0310 11:15:54.485551   63737 shared_informer.go:247] Caches are synced for PVC protection 
I0310 11:15:54.486737   63737 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I0310 11:15:54.486803   63737 shared_informer.go:247] Caches are synced for HPA 
I0310 11:15:54.540650   63737 shared_informer.go:247] Caches are synced for cronjob 
W0310 11:15:54.541903   63737 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0310 11:15:54.562501   63737 shared_informer.go:247] Caches are synced for persistent volume 
I0310 11:15:54.569025   63737 shared_informer.go:247] Caches are synced for GC 
I0310 11:15:54.588506   63737 shared_informer.go:247] Caches are synced for endpoint_slice 
I0310 11:15:54.589725   63737 shared_informer.go:247] Caches are synced for attach detach 
I0310 11:15:54.640985   63737 shared_informer.go:247] Caches are synced for TTL 
I0310 11:15:54.659231   63737 shared_informer.go:247] Caches are synced for endpoint 
I0310 11:15:54.660437   63737 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I0310 11:15:54.666843   63737 shared_informer.go:247] Caches are synced for stateful set 
I0310 11:15:54.758348   63737 shared_informer.go:247] Caches are synced for taint 
I0310 11:15:54.758497   63737 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
I0310 11:15:54.758600   63737 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0310 11:15:54.758798   63737 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0310 11:15:54.758879   63737 event.go:291] "Event occurred" object="127.0.0.1" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller"
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocated ip:10.0.0.1 with error:provided IP is already allocated
I0310 11:15:54.787333   63737 shared_informer.go:247] Caches are synced for crt configmap 
I0310 11:15:54.840072   63737 shared_informer.go:247] Caches are synced for daemon sets 
I0310 11:15:54.857820   63737 shared_informer.go:247] Caches are synced for resource quota 
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   38s
Recording: run_kubectl_version_tests
... skipping 104 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0310 11:15:59] Creating namespace namespace-1615374959-21822
namespace/namespace-1615374959-21822 created
Context "test" modified.
+++ [0310 11:15:59] Testing RESTMapper
+++ [0310 11:16:00] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 63 lines ...
namespace/namespace-1615374965-27643 created
Context "test" modified.
+++ [0310 11:16:05] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
(Bclusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 29 lines ...
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1615374973-13244 namespace.
has:Role is deprecated
Successful
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1615374973-13244 namespace.
Error: 1 warning received
has:Role is deprecated
Successful
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1615374973-13244 namespace.
Error: 1 warning received
has:Error: 1 warning received
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:163: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:164: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:165: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
message:the server doesn't have a resource type "invalid-resource"
has:the server doesn't have a resource type "invalid-resource"
role.rbac.authorization.k8s.io/group-reader created
rbac.sh:170: Successful get role/group-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:171: Successful get role/group-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: deployments:
rbac.sh:172: Successful get role/group-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: apps:
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-03-10T11:16:15Z"}