Result: FAILURE
Tests: 1 failed / 2 succeeded
Started: 2022-08-15 10:23
Elapsed: 12m6s
Revision: master

Test Failures


kubetest2 Test (1.99s)

exit status 255
(from junit_runner.xml)




Error lines from build-log.txt

ERROR: (gcloud.auth.activate-service-account) There was a problem refreshing your current auth tokens: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})
Please run:

  $ gcloud auth login

to obtain new credentials.
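This Prow job authenticates with a service-account key rather than an interactive user login, so the `gcloud auth login` suggestion above does not apply in CI; the usual recovery is to re-activate the account from its key file. A minimal sketch (the key path is hypothetical):

  # Re-activate the CI service account from its JSON key (path is illustrative)
  $ gcloud auth activate-service-account --key-file=/etc/service-account/key.json

  # Confirm which account is now active
  $ gcloud auth list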

... skipping 171 lines ...
I0815 10:24:48.135000    6103 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0815 10:24:48.137547    6103 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-85-g429ebecdca/linux/amd64/kops
I0815 10:24:49.120476    6103 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io/id_ed25519
I0815 10:24:49.127972    6103 up.go:44] Cleaning up any leaked resources from previous cluster
I0815 10:24:49.128072    6103 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops toolbox dump --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0815 10:24:49.128090    6103 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops toolbox dump --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0815 10:24:49.659705    6103 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0815 10:24:49.659757    6103 down.go:48] /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops delete cluster --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io --yes
I0815 10:24:49.659772    6103 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops delete cluster --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io --yes
I0815 10:24:49.692217    6137 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io" not found
I0815 10:24:50.231446    6103 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/15 10:24:50 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0815 10:24:50.243549    6103 http.go:37] curl https://ip.jsb.workers.dev
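The two curl calls above are kubetest2 discovering the runner's public IP (apparently feeding the --admin-access flag below): the GCE metadata endpoint returns 404 when no external access config is attached, so it falls back to an external echo service. The same probe by hand, as a sketch:

  # On GCE this returns the external IP; here it 404s, as logged above
  $ curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip"

  # Fallback used by the harness
  $ curl -s https://ip.jsb.workers.dev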
I0815 10:24:50.333313    6103 up.go:159] /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops create cluster --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.12 --ssh-public-key /tmp/kops/e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220810 --channel=alpha --networking=calico --container-runtime=containerd --discovery-store=s3://k8s-kops-prow/discovery --admin-access 34.135.114.163/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I0815 10:24:50.333551    6103 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops create cluster --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.12 --ssh-public-key /tmp/kops/e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220810 --channel=alpha --networking=calico --container-runtime=containerd --discovery-store=s3://k8s-kops-prow/discovery --admin-access 34.135.114.163/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I0815 10:24:50.366640    6147 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0815 10:24:50.387389    6147 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io/id_ed25519.pub
I0815 10:24:50.850683    6147 new_cluster.go:1168]  Cloud Provider ID = aws
... skipping 11 lines ...
*********************************************************************************

W0815 10:24:53.524228    6147 urls.go:71] Using base url from KOPS_BASE_URL env var: "https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-85-g429ebecdca"
I0815 10:24:54.913633    6147 executor.go:111] Tasks: 0 done / 97 total; 49 can run
W0815 10:24:55.151493    6147 vfs_castore.go:379] CA private key was not found
I0815 10:24:55.653335    6147 executor.go:111] Tasks: 49 done / 97 total; 22 can run
W0815 10:24:55.807311    6147 executor.go:139] error running task "BootstrapScript/master-ca-central-1a" (9m59s remaining to succeed): failed to get keyset from "etcd-manager-ca-events"
I0815 10:24:55.807608    6147 executor.go:111] Tasks: 70 done / 97 total; 23 can run
I0815 10:24:55.932475    6147 executor.go:111] Tasks: 93 done / 97 total; 3 can run
I0815 10:24:56.169191    6147 executor.go:111] Tasks: 96 done / 97 total; 1 can run
I0815 10:24:56.267512    6147 executor.go:111] Tasks: 97 done / 97 total; 0 can run
Will create resources:
  AutoscalingGroup/master-ca-central-1a.masters.e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io
... skipping 537 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:25:32.149702    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
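This validation message repeats below while kops waits for dns-controller: kops seeds the API record with the placeholder 203.0.113.123, and dns-controller replaces it once the control plane is running. One way to watch the handoff by hand (a sketch; kops publishes the record as api.<cluster-name>):

  # Still returning the placeholder means dns-controller has not updated the record yet
  $ dig +short api.e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io

  # Re-check cluster health once the record flips to a real address
  $ kops validate cluster --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io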
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:25:42.200004    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:25:52.237028    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:26:02.277488    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:26:12.324552    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:26:22.358493    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:26:32.397049    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:26:42.437375    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:26:52.473546    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:27:02.508075    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:27:12.550159    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:27:22.593480    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:27:32.629046    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:27:42.663443    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:27:52.714845    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:28:02.751522    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:28:12.787130    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:28:22.828257    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:28:32.878065    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0815 10:28:42.917106    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 9 lines ...
Machine	i-06de50d44fc5c2fe6				machine "i-06de50d44fc5c2fe6" has not yet joined cluster
Pod	kube-system/coredns-5fcc7b6498-9hlrx		system-cluster-critical pod "coredns-5fcc7b6498-9hlrx" is pending
Pod	kube-system/coredns-autoscaler-6658b4bf85-npwzt	system-cluster-critical pod "coredns-autoscaler-6658b4bf85-npwzt" is pending
Pod	kube-system/ebs-csi-controller-65ddb8876b-csr7s	system-cluster-critical pod "ebs-csi-controller-65ddb8876b-csr7s" is pending
Pod	kube-system/ebs-csi-controller-65ddb8876b-dtg55	system-cluster-critical pod "ebs-csi-controller-65ddb8876b-dtg55" is pending

Validation Failed
W0815 10:28:54.257004    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 18 lines ...
Pod	kube-system/ebs-csi-controller-65ddb8876b-csr7s	system-cluster-critical pod "ebs-csi-controller-65ddb8876b-csr7s" is pending
Pod	kube-system/ebs-csi-controller-65ddb8876b-dtg55	system-cluster-critical pod "ebs-csi-controller-65ddb8876b-dtg55" is pending
Pod	kube-system/ebs-csi-node-9xvdj			system-node-critical pod "ebs-csi-node-9xvdj" is pending
Pod	kube-system/ebs-csi-node-gpg2k			system-node-critical pod "ebs-csi-node-gpg2k" is pending
Pod	kube-system/ebs-csi-node-xtk8w			system-node-critical pod "ebs-csi-node-xtk8w" is pending

Validation Failed
W0815 10:29:05.240442    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 21 lines ...
Pod	kube-system/ebs-csi-controller-65ddb8876b-dtg55	system-cluster-critical pod "ebs-csi-controller-65ddb8876b-dtg55" is pending
Pod	kube-system/ebs-csi-node-9xvdj			system-node-critical pod "ebs-csi-node-9xvdj" is pending
Pod	kube-system/ebs-csi-node-gpg2k			system-node-critical pod "ebs-csi-node-gpg2k" is pending
Pod	kube-system/ebs-csi-node-v9hgx			system-node-critical pod "ebs-csi-node-v9hgx" is pending
Pod	kube-system/ebs-csi-node-xtk8w			system-node-critical pod "ebs-csi-node-xtk8w" is pending

Validation Failed
W0815 10:29:16.191688    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 16 lines ...
Pod	kube-system/ebs-csi-controller-65ddb8876b-dtg55	system-cluster-critical pod "ebs-csi-controller-65ddb8876b-dtg55" is pending
Pod	kube-system/ebs-csi-node-9xvdj			system-node-critical pod "ebs-csi-node-9xvdj" is pending
Pod	kube-system/ebs-csi-node-gpg2k			system-node-critical pod "ebs-csi-node-gpg2k" is pending
Pod	kube-system/ebs-csi-node-v9hgx			system-node-critical pod "ebs-csi-node-v9hgx" is pending
Pod	kube-system/ebs-csi-node-xtk8w			system-node-critical pod "ebs-csi-node-xtk8w" is pending

Validation Failed
W0815 10:29:27.145575    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 11 lines ...
Pod	kube-system/calico-node-dtdm6			system-node-critical pod "calico-node-dtdm6" is not ready (calico-node)
Pod	kube-system/calico-node-rjdsx			system-node-critical pod "calico-node-rjdsx" is not ready (calico-node)
Pod	kube-system/calico-node-sk4mg			system-node-critical pod "calico-node-sk4mg" is not ready (calico-node)
Pod	kube-system/ebs-csi-controller-65ddb8876b-csr7s	system-cluster-critical pod "ebs-csi-controller-65ddb8876b-csr7s" is not ready (ebs-plugin)
Pod	kube-system/ebs-csi-node-v9hgx			system-node-critical pod "ebs-csi-node-v9hgx" is pending

Validation Failed
W0815 10:29:38.273275    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 21 lines ...
ip-172-20-61-64.ca-central-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-61-64.ca-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-61-64.ca-central-1.compute.internal" is pending

Validation Failed
W0815 10:30:00.221938    6187 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 141 lines ...
ip-172-20-61-64.ca-central-1.compute.internal	node	True

Your cluster e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io is ready
I0815 10:31:49.664874    6103 up.go:105] cluster reported as up
I0815 10:31:49.664949    6103 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --ginkgo-args=--debug --test-args=-test.timeout=60m -num-nodes=0 --test-package-marker=stable-1.22.txt --parallel=25
I0815 10:31:49.684838    6197 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
F0815 10:31:51.652289    6197 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to get latest release name: exit status 1
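The stable-1.22.txt marker named on the kubetest2 invocation above is resolved to a concrete release version before kubectl is downloaded; the lookup failing with exit status 1 is consistent with the broken gcloud credentials reported at the top of this log. Resolving the same marker by hand, as a sketch:

  # Maps the marker to a concrete version, e.g. v1.22.12
  $ curl -sSL https://storage.googleapis.com/kubernetes-release/release/stable-1.22.txt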
I0815 10:31:51.656403    6103 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops toolbox dump --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0815 10:31:51.656448    6103 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops toolbox dump --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0815 10:32:14.647212    6103 dumplogs.go:78] /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops get cluster --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io -o yaml
I0815 10:32:14.647272    6103 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops get cluster --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io -o yaml
I0815 10:32:15.161801    6103 dumplogs.go:78] /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops get instancegroups --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io -o yaml
I0815 10:32:15.161847    6103 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/3ddc0761-1c84-11ed-9820-1e463d3c6692/kops get instancegroups --name e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io -o yaml
... skipping 477 lines ...
route-table:rtb-0618f81495694ff26	ok
vpc:vpc-0da3e3cd9081e7274	ok
dhcp-options:dopt-01e8789f67731bb02	ok
Deleted kubectl config for e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io

Deleted cluster: "e2e-e2e-kops-aws-k8s-1-22.test-cncf-aws.k8s.io"
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace