Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-17 00:04
Elapsed: 39m28s
Revision: master

No Test Failures!

Error lines from build-log.txt

Docker in Docker enabled, initializing...
================================================================================
Starting Docker: docker.
Waiting for docker to be ready, sleeping for 1 seconds.
================================================================================
Done setting up docker in docker.
ERROR: (gcloud.auth.activate-service-account) There was a problem refreshing your current auth tokens: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})
Please run:

  $ gcloud auth login

to obtain new credentials.

... skipping 222 lines ...
I0817 00:05:20.160859    6268 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0817 00:05:20.160884    6268 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/0840a1bb-1dc0-11ed-a994-1ec15ee5f766"
I0817 00:05:20.220289    6268 app.go:128] ID for this run: "0840a1bb-1dc0-11ed-a994-1ec15ee5f766"
I0817 00:05:20.220897    6268 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519
I0817 00:05:20.230034    6268 dumplogs.go:45] /tmp/kops.yVcCcoXmF toolbox dump --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0817 00:05:20.230081    6268 local.go:42] ⚙️ /tmp/kops.yVcCcoXmF toolbox dump --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0817 00:05:20.711213    6268 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0817 00:05:20.711260    6268 down.go:48] /tmp/kops.yVcCcoXmF delete cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --yes
I0817 00:05:20.711270    6268 local.go:42] ⚙️ /tmp/kops.yVcCcoXmF delete cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --yes
I0817 00:05:20.743770    6290 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0817 00:05:20.743859    6290 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-c668105d47-ed761.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-88-gf442cc2d0a
... skipping 14 lines ...
I0817 00:05:22.549746    6327 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0817 00:05:22.549775    6327 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/0840a1bb-1dc0-11ed-a994-1ec15ee5f766"
I0817 00:05:22.592176    6327 app.go:128] ID for this run: "0840a1bb-1dc0-11ed-a994-1ec15ee5f766"
I0817 00:05:22.592554    6327 up.go:44] Cleaning up any leaked resources from previous cluster
I0817 00:05:22.592780    6327 dumplogs.go:45] /tmp/kops.8bgYzU2lS toolbox dump --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0817 00:05:22.592856    6327 local.go:42] ⚙️ /tmp/kops.8bgYzU2lS toolbox dump --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0817 00:05:23.104292    6327 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0817 00:05:23.104336    6327 down.go:48] /tmp/kops.8bgYzU2lS delete cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --yes
I0817 00:05:23.104352    6327 local.go:42] ⚙️ /tmp/kops.8bgYzU2lS delete cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --yes
I0817 00:05:23.141144    6347 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0817 00:05:23.141241    6347 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-c668105d47-ed761.test-cncf-aws.k8s.io" not found
I0817 00:05:23.599825    6327 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/17 00:05:23 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0817 00:05:23.612831    6327 http.go:37] curl https://ip.jsb.workers.dev
I0817 00:05:23.984069    6327 up.go:159] /tmp/kops.8bgYzU2lS create cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.20.6 --ssh-public-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 104.154.38.161/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-1a --master-size c5.large
I0817 00:05:23.984112    6327 local.go:42] ⚙️ /tmp/kops.8bgYzU2lS create cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.20.6 --ssh-public-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 104.154.38.161/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-1a --master-size c5.large
I0817 00:05:24.018549    6358 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0817 00:05:24.018645    6358 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0817 00:05:24.037737    6358 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 555 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:06:10.793129    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:06:20.833674    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:06:30.911122    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:06:40.949869    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:06:50.991075    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:07:01.022941    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:07:11.061628    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:07:21.100844    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:07:31.144237    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:07:41.178806    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:07:51.218163    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:08:01.271397    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:08:11.310361    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:08:21.343888    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:08:31.380874    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:08:41.421972    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:08:51.472641    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:09:01.513259    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:09:11.559397    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:09:21.593472    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:09:31.626904    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:09:41.682453    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:09:51.724246    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0817 00:10:01.774078    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 15 lines ...
Pod	kube-system/calico-node-842lq						system-node-critical pod "calico-node-842lq" is pending
Pod	kube-system/calico-node-k2g64						system-node-critical pod "calico-node-k2g64" is pending
Pod	kube-system/coredns-696464bdb8-48z5p					system-cluster-critical pod "coredns-696464bdb8-48z5p" is pending
Pod	kube-system/coredns-autoscaler-6658b4bf85-9n2qg				system-cluster-critical pod "coredns-autoscaler-6658b4bf85-9n2qg" is pending
Pod	kube-system/kube-proxy-ip-172-20-58-36.eu-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-58-36.eu-west-1.compute.internal" is pending

Validation Failed
W0817 00:10:15.618737    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 14 lines ...
Pod	kube-system/calico-node-6wvnb			system-node-critical pod "calico-node-6wvnb" is pending
Pod	kube-system/calico-node-842lq			system-node-critical pod "calico-node-842lq" is pending
Pod	kube-system/calico-node-k2g64			system-node-critical pod "calico-node-k2g64" is pending
Pod	kube-system/coredns-696464bdb8-48z5p		system-cluster-critical pod "coredns-696464bdb8-48z5p" is pending
Pod	kube-system/coredns-autoscaler-6658b4bf85-9n2qg	system-cluster-critical pod "coredns-autoscaler-6658b4bf85-9n2qg" is pending

Validation Failed
W0817 00:10:27.578791    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 11 lines ...
Pod	kube-system/calico-node-6wvnb		system-node-critical pod "calico-node-6wvnb" is not ready (calico-node)
Pod	kube-system/calico-node-842lq		system-node-critical pod "calico-node-842lq" is not ready (calico-node)
Pod	kube-system/calico-node-k2g64		system-node-critical pod "calico-node-k2g64" is not ready (calico-node)
Pod	kube-system/coredns-696464bdb8-48z5p	system-cluster-critical pod "coredns-696464bdb8-48z5p" is not ready (coredns)
Pod	kube-system/coredns-696464bdb8-rhqm8	system-cluster-critical pod "coredns-696464bdb8-rhqm8" is not ready (coredns)

Validation Failed
W0817 00:10:39.705731    6396 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 509 lines ...
evicting pod kube-system/dns-controller-6cdd4f9c8c-bjgtx
evicting pod kube-system/calico-kube-controllers-57fd98f9d9-cpffl
I0817 00:15:36.081679    6508 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0817 00:15:41.085767    6508 instancegroups.go:591] Stopping instance "i-0ee406f880c8ce002", node "ip-172-20-58-138.eu-west-1.compute.internal", in group "master-eu-west-1a.masters.e2e-c668105d47-ed761.test-cncf-aws.k8s.io" (this may take a while).
I0817 00:15:41.324443    6508 instancegroups.go:436] waiting for 15s after terminating instance
I0817 00:15:56.333943    6508 instancegroups.go:470] Validating the cluster.
I0817 00:16:26.386036    6508 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.250.222.95:443: i/o timeout.
I0817 00:17:26.437144    6508 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.250.222.95:443: i/o timeout.
I0817 00:18:26.480129    6508 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.250.222.95:443: i/o timeout.
I0817 00:19:26.516979    6508 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.250.222.95:443: i/o timeout.
I0817 00:20:26.569092    6508 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.250.222.95:443: i/o timeout.
I0817 00:21:26.602419    6508 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.250.222.95:443: i/o timeout.
I0817 00:21:59.500452    6508 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-51-194.eu-west-1.compute.internal" of role "master" is not ready.
I0817 00:22:31.535450    6508 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-node-critical pod "calico-node-4mj8j" is not ready (calico-node).
I0817 00:23:03.574306    6508 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-57fd98f9d9-jkq57" is not ready (calico-kube-controllers).
I0817 00:23:36.093490    6508 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0817 00:23:48.068686    6508 instancegroups.go:503] Cluster validated.
I0817 00:23:48.068764    6508 instancegroups.go:470] Validating the cluster.
... skipping 108 lines ...
I0817 00:39:43.714570    6557 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/0840a1bb-1dc0-11ed-a994-1ec15ee5f766"
I0817 00:39:43.717715    6557 app.go:128] ID for this run: "0840a1bb-1dc0-11ed-a994-1ec15ee5f766"
I0817 00:39:43.717851    6557 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=v1.21.7 --parallel 25
I0817 00:39:43.737197    6573 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0817 00:39:43.744325    6573 kubectl.go:148] gsutil cp gs://kubernetes-release/release/v1.21.7/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
PreconditionException: 412 The type of authentication token used for this request requires that Uniform Bucket Level Access be enabled.
F0817 00:39:45.669028    6573 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release v1.21.7: exit status 1
Error: exit status 255
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-c668105d47-ed761.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.yVcCcoXmF --down
I0817 00:39:45.703765    6761 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0817 00:39:45.704897    6761 app.go:61] The files in RunDir shall not be part of Artifacts
I0817 00:39:45.704922    6761 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0817 00:39:45.704945    6761 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/0840a1bb-1dc0-11ed-a994-1ec15ee5f766"
... skipping 320 lines ...