PR nilo19: Automated cherry pick of #92599: Delete default load balancer source range (0.0.0.0/0) to prevent redundant network security rules.
Result: FAILURE
Tests: 0 failed / 70 succeeded
Started: 2020-06-30 10:13
Elapsed: 18m2s
Revision: a25d965e78cea95881767fe8c37ab6a275d790a8
Refs: 92641
resultstore: https://source.cloud.google.com/results/invocations/ce383686-1c8a-4f6c-9050-4bb61259c94d/targets/test

No Test Failures!



Error lines from build-log.txt

... skipping 621 lines ...
stat /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/third_party/etcd/Documentation/README.md: no such file or directory
+++ [0630 10:28:58] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0630 10:30:00] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0630 10:30:01.302321   53156 serving.go:319] Generated self-signed cert in-memory
W0630 10:30:02.092008   53156 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0630 10:30:02.092045   53156 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0630 10:30:02.092052   53156 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0630 10:30:02.092066   53156 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0630 10:30:02.092079   53156 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0630 10:30:02.092114   53156 controllermanager.go:161] Version: v1.16.13-rc.0.7+07ba67d6694a1c
I0630 10:30:02.093816   53156 secure_serving.go:123] Serving securely on [::]:10257
I0630 10:30:02.094846   53156 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0630 10:30:02.094945   53156 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...
I0630 10:30:02.123787   53156 leaderelection.go:251] successfully acquired lease kube-system/kube-controller-manager
... skipping 31 lines ...
I0630 10:30:02.374562   53156 namespace_controller.go:186] Starting namespace controller
I0630 10:30:02.374581   53156 shared_informer.go:197] Waiting for caches to sync for namespace
I0630 10:30:02.795345   53156 controllermanager.go:534] Started "garbagecollector"
I0630 10:30:02.795669   53156 garbagecollector.go:130] Starting garbage collector controller
I0630 10:30:02.795695   53156 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0630 10:30:02.795986   53156 graph_builder.go:282] GraphBuilder running
E0630 10:30:02.802062   53156 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0630 10:30:02.802190   53156 controllermanager.go:526] Skipping "service"
I0630 10:30:02.802903   53156 controllermanager.go:534] Started "persistentvolume-expander"
I0630 10:30:02.804062   53156 expand_controller.go:300] Starting expand controller
I0630 10:30:02.804079   53156 shared_informer.go:197] Waiting for caches to sync for expand
I0630 10:30:02.812112   53156 controllermanager.go:534] Started "serviceaccount"
I0630 10:30:02.814523   53156 controllermanager.go:534] Started "deployment"
... skipping 59 lines ...
I0630 10:30:03.040175   53156 shared_informer.go:197] Waiting for caches to sync for job
I0630 10:30:03.040638   53156 controllermanager.go:534] Started "horizontalpodautoscaling"
I0630 10:30:03.041072   53156 controllermanager.go:534] Started "disruption"
I0630 10:30:03.041084   53156 horizontal.go:156] Starting HPA controller
I0630 10:30:03.041097   53156 shared_informer.go:197] Waiting for caches to sync for HPA
I0630 10:30:03.041376   53156 node_lifecycle_controller.go:77] Sending events to api server
E0630 10:30:03.041405   53156 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0630 10:30:03.041415   53156 controllermanager.go:526] Skipping "cloud-node-lifecycle"
I0630 10:30:03.041793   53156 controllermanager.go:534] Started "clusterrole-aggregation"
I0630 10:30:03.041863   53156 disruption.go:330] Starting disruption controller
I0630 10:30:03.043100   53156 shared_informer.go:197] Waiting for caches to sync for disruption
I0630 10:30:03.041900   53156 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I0630 10:30:03.043130   53156 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
I0630 10:30:03.043614   53156 controllermanager.go:534] Started "pvc-protection"
I0630 10:30:03.044836   53156 pvc_protection_controller.go:100] Starting PVC protection controller
I0630 10:30:03.045305   53156 shared_informer.go:197] Waiting for caches to sync for PVC protection
I0630 10:30:03.159275   53156 shared_informer.go:204] Caches are synced for certificate 
I0630 10:30:03.161381   53156 shared_informer.go:204] Caches are synced for TTL 
I0630 10:30:03.178640   53156 shared_informer.go:204] Caches are synced for namespace 
node/127.0.0.1 created
W0630 10:30:03.215487   53156 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0630 10:30:03.243835   53156 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
E0630 10:30:03.273344   53156 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0630 10:30:03.274205   53156 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
+++ [0630 10:30:03] Checking kubectl version
I0630 10:30:03.315194   53156 shared_informer.go:204] Caches are synced for service account 
I0630 10:30:03.316853   53156 shared_informer.go:204] Caches are synced for deployment 
I0630 10:30:03.317459   53156 shared_informer.go:204] Caches are synced for ReplicaSet 
I0630 10:30:03.319410   49599 controller.go:606] quota admission added evaluator for: serviceaccounts
I0630 10:30:03.329326   53156 shared_informer.go:204] Caches are synced for daemon sets 
... skipping 103 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0630 10:30:14] Creating namespace namespace-1593513014-16646
namespace/namespace-1593513014-16646 created
Context "test" modified.
+++ [0630 10:30:15] Testing RESTMapper
+++ [0630 10:30:16] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 590 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 12 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
Name:         env-test-pod
Namespace:    test-kubectl-describe-pod
Priority:     0
... skipping 90 lines ...
core.sh:328: Successful get pod valid-pod {{range.items}}{{.metadata.annotations}}:{{end}}: 
pod/valid-pod labeled
core.sh:332: Successful get pod valid-pod {{range.metadata.annotations}}{{.}}:{{end}}: :kubectl label pods valid-pod record-change=true --record=true --server=http://127.0.0.1:8080 --match-server-version=true:
pod/valid-pod labeled
core.sh:338: Successful get pod valid-pod {{range.metadata.annotations}}{{.}}:{{end}}: :kubectl label pods valid-pod record-change=true --record=true --server=http://127.0.0.1:8080 --match-server-version=true:
pod/valid-pod labeled
{"component":"entrypoint","file":"prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2020-06-30T10:31:53Z"}
core.sh:345: Successful get pod valid-pod {{range.metadata.annotations}}{{.}}:{{end}}: :kubectl label pods valid-pod new-record-change=true --record=true --server=http://127.0.0.1:8080 --match-server-version=true: