Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-15 18:03
Elapsed: 39m48s
Revision: master

No Test Failures!



Error lines from build-log.txt

Docker in Docker enabled, initializing...
================================================================================
Starting Docker: docker.
Waiting for docker to be ready, sleeping for 1 seconds.
================================================================================
Done setting up docker in docker.
ERROR: (gcloud.auth.activate-service-account) There was a problem refreshing your current auth tokens: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})
Please run:

  $ gcloud auth login

to obtain new credentials.
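The `invalid_grant` failure above is printed by gcloud as a message plus an OAuth error payload. A minimal sketch of classifying such a payload — the helper name is hypothetical, not a gcloud API; the payload shape mirrors the error printed above:

```python
# Illustrative only: classify_oauth_error is a hypothetical helper, not part
# of gcloud; the payload shape mirrors the error tuple in the log above.
def classify_oauth_error(payload: dict) -> str:
    """Return a short diagnosis for an OAuth token-refresh error payload."""
    code = payload.get("error", "unknown")
    if code == "invalid_grant":
        # Typically a deleted/rotated service-account key or severe clock
        # skew; re-activating the account with a fresh key is the usual fix.
        return "credentials invalid or revoked: re-run gcloud auth activate-service-account"
    return f"unhandled OAuth error: {code}"

payload = {"error": "invalid_grant", "error_description": "Invalid JWT Signature."}
print(classify_oauth_error(payload))
```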

... skipping 222 lines ...
I0815 18:04:58.965480    6347 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0815 18:04:58.965508    6347 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/80d07e4e-1cc4-11ed-9820-1e463d3c6692"
I0815 18:04:59.001275    6347 app.go:128] ID for this run: "80d07e4e-1cc4-11ed-9820-1e463d3c6692"
I0815 18:04:59.001574    6347 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519
I0815 18:04:59.015623    6347 dumplogs.go:45] /tmp/kops.fdraziAzU toolbox dump --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0815 18:04:59.015677    6347 local.go:42] ⚙️ /tmp/kops.fdraziAzU toolbox dump --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0815 18:04:59.534433    6347 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0815 18:04:59.534477    6347 down.go:48] /tmp/kops.fdraziAzU delete cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --yes
I0815 18:04:59.534489    6347 local.go:42] ⚙️ /tmp/kops.fdraziAzU delete cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --yes
I0815 18:04:59.566555    6369 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0815 18:04:59.566676    6369 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-c668105d47-ed761.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-88-gf442cc2d0a
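The `kops-base-from-marker` trace above resolves the literal channel name `latest` by fetching a marker file and using its body as `KOPS_BASE_URL`. A sketch of that resolution, under the assumption that the function name and the non-`latest` handling are illustrative (only the marker URL and the two branches appear in the shell trace):

```python
# Sketch of the marker-resolution step traced above; resolve_kops_base_url
# is a hypothetical name for the logic, not the actual script's function.
import urllib.request

MARKER_URL = "https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt"

def resolve_kops_base_url(channel: str, fetch=None) -> str:
    """Map a channel string to a kops base URL, mirroring the shell trace."""
    if channel.startswith("https:"):
        return channel                  # already a full URL: use as-is
    if channel == "latest":
        fetch = fetch or (lambda url: urllib.request.urlopen(url).read().decode().strip())
        return fetch(MARKER_URL)        # marker file body is the real base URL
    raise ValueError(f"unrecognized channel: {channel}")

# Stubbed fetch so the example runs offline:
print(resolve_kops_base_url(
    "latest",
    fetch=lambda _: "https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3"))
```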
... skipping 14 lines ...
I0815 18:05:01.337467    6406 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0815 18:05:01.337499    6406 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/80d07e4e-1cc4-11ed-9820-1e463d3c6692"
I0815 18:05:01.353811    6406 app.go:128] ID for this run: "80d07e4e-1cc4-11ed-9820-1e463d3c6692"
I0815 18:05:01.353916    6406 up.go:44] Cleaning up any leaked resources from previous cluster
I0815 18:05:01.353949    6406 dumplogs.go:45] /tmp/kops.KTHvM0uz6 toolbox dump --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0815 18:05:01.354004    6406 local.go:42] ⚙️ /tmp/kops.KTHvM0uz6 toolbox dump --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0815 18:05:01.839011    6406 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0815 18:05:01.839067    6406 down.go:48] /tmp/kops.KTHvM0uz6 delete cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --yes
I0815 18:05:01.839078    6406 local.go:42] ⚙️ /tmp/kops.KTHvM0uz6 delete cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --yes
I0815 18:05:01.873943    6427 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0815 18:05:01.874066    6427 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-c668105d47-ed761.test-cncf-aws.k8s.io" not found
I0815 18:05:02.340203    6406 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/15 18:05:02 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
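The two `curl` lines here show the harness first asking the GCE metadata server for its external IP and, on the 404 above, falling back to a public IP-echo service. A hedged sketch of that fallback (the helper names are illustrative; only the two URLs come from the log):

```python
# Illustrative fallback logic for the external-IP lookup traced above.
METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/network-interfaces/0/access-configs/0/external-ip")
FALLBACK_URL = "https://ip.jsb.workers.dev"

def external_ip(fetch) -> str:
    """Try the metadata service first; on failure use the public echo service.

    `fetch(url)` returns the response body or raises on a non-2xx status.
    """
    try:
        return fetch(METADATA_URL)
    except Exception:
        # A 404 here is expected off-GCE or without an access config, as in the log.
        return fetch(FALLBACK_URL)

def fake_fetch(url):
    if url == METADATA_URL:
        raise RuntimeError("404")
    return "130.211.236.176"

print(external_ip(fake_fetch))  # → 130.211.236.176
```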
I0815 18:05:02.354357    6406 http.go:37] curl https://ip.jsb.workers.dev
I0815 18:05:02.536210    6406 up.go:159] /tmp/kops.KTHvM0uz6 create cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.20.6 --ssh-public-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 130.211.236.176/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I0815 18:05:02.536278    6406 local.go:42] ⚙️ /tmp/kops.KTHvM0uz6 create cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.20.6 --ssh-public-key /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 130.211.236.176/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I0815 18:05:02.569223    6437 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0815 18:05:02.569323    6437 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0815 18:05:02.589109    6437 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-c668105d47-ed761.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 545 lines ...
I0815 18:05:48.042744    6406 up.go:243] /tmp/kops.KTHvM0uz6 validate cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0815 18:05:48.042802    6406 local.go:42] ⚙️ /tmp/kops.KTHvM0uz6 validate cluster --name e2e-c668105d47-ed761.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0815 18:05:48.074494    6475 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0815 18:05:48.074606    6475 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-c668105d47-ed761.test-cncf-aws.k8s.io

W0815 18:05:49.312925    6475 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0815 18:05:59.346487    6475 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
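`validate cluster --count 10 --wait 15m0s` drives the retry loop whose output fills this part of the log: poll until the cluster validates a number of consecutive times or the wait deadline expires. A simplified sketch of such a loop — the function name and the 10-second interval are assumptions inferred from the timestamps, not kops source:

```python
import time

def wait_for_validation(validate, count=10, wait=15 * 60, interval=10,
                        clock=time.monotonic, sleep=time.sleep):
    """Return True once validate() succeeds `count` times in a row,
    False when the wait deadline expires first."""
    deadline = clock() + wait
    consecutive = 0
    while clock() < deadline:
        if validate():
            consecutive += 1
            if consecutive >= count:
                return True
        else:
            consecutive = 0      # any failure resets the success streak
        sleep(interval)
    return False

# Fake validator: fails 3 times, then always succeeds.
results = iter([False, False, False] + [True] * 20)
print(wait_for_validation(lambda: next(results), count=5,
                          interval=0, sleep=lambda s: None))  # → True
```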
... skipping 353 lines ...
W0815 18:10:00.399281    6475 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 10 lines ...
Pod	kube-system/calico-kube-controllers-57fd98f9d9-h578n	system-cluster-critical pod "calico-kube-controllers-57fd98f9d9-h578n" is pending
Pod	kube-system/calico-node-g6fq4				system-node-critical pod "calico-node-g6fq4" is not ready (calico-node)
Pod	kube-system/calico-node-qzbhz				system-node-critical pod "calico-node-qzbhz" is pending
Pod	kube-system/coredns-696464bdb8-z52zn			system-cluster-critical pod "coredns-696464bdb8-z52zn" is pending
Pod	kube-system/coredns-autoscaler-6658b4bf85-4pc6n		system-cluster-critical pod "coredns-autoscaler-6658b4bf85-4pc6n" is pending

Validation Failed
W0815 18:10:13.407254    6475 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 19 lines ...
Pod	kube-system/coredns-696464bdb8-z52zn					system-cluster-critical pod "coredns-696464bdb8-z52zn" is pending
Pod	kube-system/coredns-autoscaler-6658b4bf85-4pc6n				system-cluster-critical pod "coredns-autoscaler-6658b4bf85-4pc6n" is pending
Pod	kube-system/kube-proxy-ip-172-20-37-183.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-37-183.eu-west-3.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-39-2.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-39-2.eu-west-3.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-51-10.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-51-10.eu-west-3.compute.internal" is pending

Validation Failed
W0815 18:10:25.432211    6475 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 14 lines ...
Pod	kube-system/calico-node-nn254			system-node-critical pod "calico-node-nn254" is pending
Pod	kube-system/calico-node-qzbhz			system-node-critical pod "calico-node-qzbhz" is pending
Pod	kube-system/calico-node-sr6nl			system-node-critical pod "calico-node-sr6nl" is pending
Pod	kube-system/coredns-696464bdb8-z52zn		system-cluster-critical pod "coredns-696464bdb8-z52zn" is pending
Pod	kube-system/coredns-autoscaler-6658b4bf85-4pc6n	system-cluster-critical pod "coredns-autoscaler-6658b4bf85-4pc6n" is pending

Validation Failed
W0815 18:10:37.396027    6475 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 11 lines ...
Pod	kube-system/calico-node-nn254			system-node-critical pod "calico-node-nn254" is not ready (calico-node)
Pod	kube-system/calico-node-qzbhz			system-node-critical pod "calico-node-qzbhz" is not ready (calico-node)
Pod	kube-system/calico-node-sr6nl			system-node-critical pod "calico-node-sr6nl" is not ready (calico-node)
Pod	kube-system/coredns-696464bdb8-z52zn		system-cluster-critical pod "coredns-696464bdb8-z52zn" is pending
Pod	kube-system/coredns-autoscaler-6658b4bf85-4pc6n	system-cluster-critical pod "coredns-autoscaler-6658b4bf85-4pc6n" is pending

Validation Failed
W0815 18:10:49.408259    6475 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-696464bdb8-2nkf4	system-cluster-critical pod "coredns-696464bdb8-2nkf4" is not ready (coredns)
Pod	kube-system/coredns-696464bdb8-z52zn	system-cluster-critical pod "coredns-696464bdb8-z52zn" is not ready (coredns)

Validation Failed
W0815 18:11:01.355379    6475 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 509 lines ...
evicting pod kube-system/dns-controller-6cdd4f9c8c-hqk6h
evicting pod kube-system/calico-kube-controllers-57fd98f9d9-h578n
I0815 18:15:49.216795    6591 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0815 18:15:54.220876    6591 instancegroups.go:591] Stopping instance "i-0ff9bb164c2e4ea18", node "ip-172-20-49-49.eu-west-3.compute.internal", in group "master-eu-west-3a.masters.e2e-c668105d47-ed761.test-cncf-aws.k8s.io" (this may take a while).
I0815 18:15:54.536551    6591 instancegroups.go:436] waiting for 15s after terminating instance
I0815 18:16:09.540022    6591 instancegroups.go:470] Validating the cluster.
I0815 18:16:39.576746    6591 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.78.41:443: i/o timeout.
I0815 18:17:39.620526    6591 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.78.41:443: i/o timeout.
I0815 18:18:39.685570    6591 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.78.41:443: i/o timeout.
I0815 18:19:39.739218    6591 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.78.41:443: i/o timeout.
I0815 18:20:39.783842    6591 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.78.41:443: i/o timeout.
I0815 18:21:39.840493    6591 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.78.41:443: i/o timeout.
I0815 18:22:39.898149    6591 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c668105d47-ed761.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.78.41:443: i/o timeout.
I0815 18:23:12.794028    6591 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-39-2.eu-west-3.compute.internal" of role "node" is not ready, system-cluster-critical pod "calico-kube-controllers-57fd98f9d9-tbzgl" is not ready (calico-kube-controllers).
I0815 18:23:44.716060    6591 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-57fd98f9d9-tbzgl" is not ready (calico-kube-controllers).
I0815 18:24:16.719717    6591 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0815 18:24:28.802971    6591 instancegroups.go:503] Cluster validated.
I0815 18:24:28.803036    6591 instancegroups.go:470] Validating the cluster.
I0815 18:24:30.487387    6591 instancegroups.go:503] Cluster validated.
... skipping 111 lines ...
I0815 18:39:15.807798    6641 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/80d07e4e-1cc4-11ed-9820-1e463d3c6692"
I0815 18:39:15.813379    6641 app.go:128] ID for this run: "80d07e4e-1cc4-11ed-9820-1e463d3c6692"
I0815 18:39:15.813440    6641 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=v1.21.7 --parallel 25
I0815 18:39:15.836290    6661 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0815 18:39:15.844068    6661 kubectl.go:148] gsutil cp gs://kubernetes-release/release/v1.21.7/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
PreconditionException: 412 The type of authentication token used for this request requires that Uniform Bucket Level Access be enabled.
F0815 18:39:17.923138    6661 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release v1.21.7: exit status 1
Error: exit status 255
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-c668105d47-ed761.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.fdraziAzU --down
I0815 18:39:17.964794    6850 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0815 18:39:17.965929    6850 app.go:61] The files in RunDir shall not be part of Artifacts
I0815 18:39:17.965956    6850 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0815 18:39:17.965996    6850 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/80d07e4e-1cc4-11ed-9820-1e463d3c6692"
... skipping 342 lines ...