Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-11 04:59
Elapsed: 36m47s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 128 lines ...
I1011 04:59:45.158581    4724 http.go:37] curl https://storage.googleapis.com/kops-ci/markers/release-1.22/latest-ci-updown-green.txt
I1011 04:59:45.160761    4724 http.go:37] curl https://storage.googleapis.com/k8s-staging-kops/kops/releases/1.22.0-beta.3+v1.22.0-beta.2-34-gab6c4fd5b0/linux/amd64/kops
I1011 04:59:45.945316    4724 up.go:43] Cleaning up any leaked resources from previous cluster
I1011 04:59:45.945453    4724 dumplogs.go:40] /logs/artifacts/dfc3169d-2a4f-11ec-b781-c649eef4635a/kops toolbox dump --name e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user
I1011 04:59:45.958886    4745 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1011 04:59:45.958978    4745 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" not found
W1011 04:59:46.412943    4724 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1011 04:59:46.412998    4724 down.go:48] /logs/artifacts/dfc3169d-2a4f-11ec-b781-c649eef4635a/kops delete cluster --name e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --yes
I1011 04:59:46.427816    4755 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1011 04:59:46.427925    4755 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" not found
I1011 04:59:46.877376    4724 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/11 04:59:46 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1011 04:59:46.885352    4724 http.go:37] curl https://ip.jsb.workers.dev
I1011 04:59:46.961349    4724 up.go:144] /logs/artifacts/dfc3169d-2a4f-11ec-b781-c649eef4635a/kops create cluster --name e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.5 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=309956199498/RHEL-8.4.0_HVM-20210825-x86_64-0-Hourly2-GP2 --channel=alpha --networking=kubenet --container-runtime=containerd --admin-access 34.123.84.185/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-south-1a --master-size c5.large
I1011 04:59:46.974888    4765 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1011 04:59:46.975134    4765 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1011 04:59:46.998146    4765 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I1011 04:59:47.498538    4765 new_cluster.go:1055]  Cloud Provider ID = aws
... skipping 40 lines ...

I1011 05:00:09.316890    4724 up.go:181] /logs/artifacts/dfc3169d-2a4f-11ec-b781-c649eef4635a/kops validate cluster --name e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --count 10 --wait 15m0s
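The `--wait 15m0s` flag above makes `kops validate cluster` poll repeatedly until the cluster reports healthy or the wait expires, which is why the log below shows the same output every ~10 seconds. A minimal sketch of that retry pattern (an illustration of the behavior, not kops's actual implementation):

```python
import time

def wait_until_healthy(check, wait_seconds=15 * 60, interval=10):
    """Poll `check` every `interval` seconds until it returns an empty
    error list or `wait_seconds` elapses. Mirrors the spirit of
    `kops validate cluster --wait 15m`; names and structure are
    hypothetical, not taken from the kops codebase."""
    deadline = time.monotonic() + wait_seconds
    while True:
        errors = check()
        if not errors:
            return True          # cluster validated
        if time.monotonic() >= deadline:
            return False         # wait expired, validation still failing
        time.sleep(interval)
```

In this run the check kept returning the dns/apiserver error for the full window, so every attempt logged "(will retry): cluster not yet healthy".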
I1011 05:00:09.341901    4785 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1011 05:00:09.342015    4785 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io

W1011 05:00:11.196645    4785 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1011 05:00:21.231446    4785 validate_cluster.go:232] (will retry): cluster not yet healthy
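The message above describes kops's DNS bootstrap: the apiserver record is seeded with the placeholder 203.0.113.123 and only becomes real once dns-controller rewrites it to the master's address. The readiness test that implies can be sketched as follows (a hypothetical helper for illustration; a live check would obtain `resolved_ips` from an actual DNS lookup of the api record):

```python
KOPS_PLACEHOLDER = "203.0.113.123"  # address kops seeds the record with

def api_dns_ready(resolved_ips):
    """True once the apiserver DNS record resolves to something other
    than the kops placeholder. IPs are passed in rather than looked up
    so the logic is testable without a cluster."""
    return bool(resolved_ips) and KOPS_PLACEHOLDER not in resolved_ips
```

Until dns-controller launches on the master and the record propagates, this stays false, and validation keeps failing exactly as the log shows.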
... skipping ~30 repeated validation attempts (05:00:31 through 05:05:22), each printing the same INSTANCE GROUPS table, an empty NODE STATUS, and the identical dns/apiserver "Validation Failed" error shown above ...
W1011 05:05:32.467860    4785 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

... skipping 7 lines ...
Machine	i-0b1bd823a638cd80d				machine "i-0b1bd823a638cd80d" has not yet joined cluster
Machine	i-0d99681aaa2650ee3				machine "i-0d99681aaa2650ee3" has not yet joined cluster
Machine	i-0fa79d1cf19aa88af				machine "i-0fa79d1cf19aa88af" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-dv6ss		system-cluster-critical pod "coredns-5dc785954d-dv6ss" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-vwtvb	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-vwtvb" is pending

Validation Failed
W1011 05:05:47.835376    4785 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

... skipping 8 lines ...
Machine	i-0d99681aaa2650ee3				machine "i-0d99681aaa2650ee3" has not yet joined cluster
Machine	i-0fa79d1cf19aa88af				machine "i-0fa79d1cf19aa88af" has not yet joined cluster
Node	ip-172-20-45-252.ap-south-1.compute.internal	node "ip-172-20-45-252.ap-south-1.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-dv6ss		system-cluster-critical pod "coredns-5dc785954d-dv6ss" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-vwtvb	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-vwtvb" is pending

Validation Failed
W1011 05:06:01.388824    4785 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

... skipping 9 lines ...
KIND	NAME									MESSAGE
Node	ip-172-20-33-34.ap-south-1.compute.internal				node "ip-172-20-33-34.ap-south-1.compute.internal" of role "node" is not ready
Node	ip-172-20-43-95.ap-south-1.compute.internal				node "ip-172-20-43-95.ap-south-1.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-5cr6n					system-cluster-critical pod "coredns-5dc785954d-5cr6n" is not ready (coredns)
Pod	kube-system/kube-proxy-ip-172-20-42-144.ap-south-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-42-144.ap-south-1.compute.internal" is pending

Validation Failed
W1011 05:06:15.123942    4785 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

... skipping 6 lines ...
ip-172-20-45-252.ap-south-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-33-34.ap-south-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-33-34.ap-south-1.compute.internal" is pending

Validation Failed
W1011 05:06:28.627843    4785 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

... skipping 967 lines ...
STEP: Destroying namespace "node-problem-detector-7616" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [2.177 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 68 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:09:15.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4358" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:15.929: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 45 lines ...
STEP: Creating a kubernetes client
Oct 11 05:09:11.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
W1011 05:09:13.955885    5397 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 11 05:09:13.956: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:253
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:09:16.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-723" for this suite.


• [SLOW TEST:5.727 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:253
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":1,"skipped":3,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
W1011 05:09:12.899762    5451 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 11 05:09:12.899: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Oct 11 05:09:13.618: INFO: Waiting up to 5m0s for pod "security-context-2e622674-b24f-492b-a295-e8bfc54faed5" in namespace "security-context-1349" to be "Succeeded or Failed"
Oct 11 05:09:13.856: INFO: Pod "security-context-2e622674-b24f-492b-a295-e8bfc54faed5": Phase="Pending", Reason="", readiness=false. Elapsed: 237.402514ms
Oct 11 05:09:16.094: INFO: Pod "security-context-2e622674-b24f-492b-a295-e8bfc54faed5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475599154s
Oct 11 05:09:18.332: INFO: Pod "security-context-2e622674-b24f-492b-a295-e8bfc54faed5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.713878121s
STEP: Saw pod success
Oct 11 05:09:18.332: INFO: Pod "security-context-2e622674-b24f-492b-a295-e8bfc54faed5" satisfied condition "Succeeded or Failed"
Oct 11 05:09:18.570: INFO: Trying to get logs from node ip-172-20-43-95.ap-south-1.compute.internal pod security-context-2e622674-b24f-492b-a295-e8bfc54faed5 container test-container: <nil>
STEP: delete the pod
Oct 11 05:09:19.064: INFO: Waiting for pod security-context-2e622674-b24f-492b-a295-e8bfc54faed5 to disappear
Oct 11 05:09:19.303: INFO: Pod security-context-2e622674-b24f-492b-a295-e8bfc54faed5 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.084 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:09:18.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:09:20.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2235" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:21.405: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 33 lines ...
STEP: Destroying namespace "apply-6793" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•S
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 11 05:09:13.546: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c86238ea-3ad4-4992-87b2-3e992393f944" in namespace "projected-1169" to be "Succeeded or Failed"
Oct 11 05:09:13.781: INFO: Pod "downwardapi-volume-c86238ea-3ad4-4992-87b2-3e992393f944": Phase="Pending", Reason="", readiness=false. Elapsed: 235.314695ms
Oct 11 05:09:16.017: INFO: Pod "downwardapi-volume-c86238ea-3ad4-4992-87b2-3e992393f944": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471022345s
Oct 11 05:09:18.253: INFO: Pod "downwardapi-volume-c86238ea-3ad4-4992-87b2-3e992393f944": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707000972s
Oct 11 05:09:20.489: INFO: Pod "downwardapi-volume-c86238ea-3ad4-4992-87b2-3e992393f944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.94261327s
STEP: Saw pod success
Oct 11 05:09:20.489: INFO: Pod "downwardapi-volume-c86238ea-3ad4-4992-87b2-3e992393f944" satisfied condition "Succeeded or Failed"
Oct 11 05:09:20.723: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod downwardapi-volume-c86238ea-3ad4-4992-87b2-3e992393f944 container client-container: <nil>
STEP: delete the pod
Oct 11 05:09:21.212: INFO: Waiting for pod downwardapi-volume-c86238ea-3ad4-4992-87b2-3e992393f944 to disappear
Oct 11 05:09:21.446: INFO: Pod downwardapi-volume-c86238ea-3ad4-4992-87b2-3e992393f944 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.258 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:22.162: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
W1011 05:09:12.890517    5462 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 11 05:09:12.890: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 11 05:09:13.618: INFO: Waiting up to 5m0s for pod "pod-ebbab149-d5c8-4f3a-8361-8429d57fd471" in namespace "emptydir-5486" to be "Succeeded or Failed"
Oct 11 05:09:13.858: INFO: Pod "pod-ebbab149-d5c8-4f3a-8361-8429d57fd471": Phase="Pending", Reason="", readiness=false. Elapsed: 240.093263ms
Oct 11 05:09:16.098: INFO: Pod "pod-ebbab149-d5c8-4f3a-8361-8429d57fd471": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480591814s
Oct 11 05:09:18.340: INFO: Pod "pod-ebbab149-d5c8-4f3a-8361-8429d57fd471": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722448401s
Oct 11 05:09:20.583: INFO: Pod "pod-ebbab149-d5c8-4f3a-8361-8429d57fd471": Phase="Pending", Reason="", readiness=false. Elapsed: 6.965092089s
Oct 11 05:09:22.823: INFO: Pod "pod-ebbab149-d5c8-4f3a-8361-8429d57fd471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.205354146s
STEP: Saw pod success
Oct 11 05:09:22.823: INFO: Pod "pod-ebbab149-d5c8-4f3a-8361-8429d57fd471" satisfied condition "Succeeded or Failed"
Oct 11 05:09:23.064: INFO: Trying to get logs from node ip-172-20-43-95.ap-south-1.compute.internal pod pod-ebbab149-d5c8-4f3a-8361-8429d57fd471 container test-container: <nil>
STEP: delete the pod
Oct 11 05:09:23.580: INFO: Waiting for pod pod-ebbab149-d5c8-4f3a-8361-8429d57fd471 to disappear
Oct 11 05:09:23.820: INFO: Pod pod-ebbab149-d5c8-4f3a-8361-8429d57fd471 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 33 lines ...
• [SLOW TEST:14.797 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:481
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1027
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:32.186: INFO: Driver csi-hostpath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:37.314: INFO: Only supported for providers [gce gke] (not aws)
... skipping 37 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-api-machinery] Generated clientset
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:09:16.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename clientset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:23.437 seconds]
[sig-api-machinery] Generated clientset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:105
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:39.499: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 142 lines ...
• [SLOW TEST:19.037 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Deployment should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:40.523: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 126 lines ...
• [SLOW TEST:28.890 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:40.797: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 83 lines ...
• [SLOW TEST:26.694 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:42.660: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 155 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:51.911: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:09:39.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:09:54.159: INFO: Only supported for providers [vsphere] (not aws)
... skipping 64 lines ...
• [SLOW TEST:14.420 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:531
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":3,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Oct 11 05:09:40.184: INFO: PersistentVolumeClaim pvc-hvwlm found but phase is Pending instead of Bound.
Oct 11 05:09:42.429: INFO: PersistentVolumeClaim pvc-hvwlm found and phase=Bound (15.967079362s)
Oct 11 05:09:42.429: INFO: Waiting up to 3m0s for PersistentVolume local-5jq6h to have phase Bound
Oct 11 05:09:42.674: INFO: PersistentVolume local-5jq6h found and phase=Bound (244.716814ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-llwz
STEP: Creating a pod to test subpath
Oct 11 05:09:43.412: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-llwz" in namespace "provisioning-470" to be "Succeeded or Failed"
Oct 11 05:09:43.657: INFO: Pod "pod-subpath-test-preprovisionedpv-llwz": Phase="Pending", Reason="", readiness=false. Elapsed: 244.796966ms
Oct 11 05:09:45.904: INFO: Pod "pod-subpath-test-preprovisionedpv-llwz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492152297s
Oct 11 05:09:48.150: INFO: Pod "pod-subpath-test-preprovisionedpv-llwz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.73839461s
Oct 11 05:09:50.395: INFO: Pod "pod-subpath-test-preprovisionedpv-llwz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.983574421s
Oct 11 05:09:52.641: INFO: Pod "pod-subpath-test-preprovisionedpv-llwz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.229630222s
STEP: Saw pod success
Oct 11 05:09:52.642: INFO: Pod "pod-subpath-test-preprovisionedpv-llwz" satisfied condition "Succeeded or Failed"
Oct 11 05:09:52.887: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-llwz container test-container-subpath-preprovisionedpv-llwz: <nil>
STEP: delete the pod
Oct 11 05:09:53.391: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-llwz to disappear
Oct 11 05:09:53.635: INFO: Pod pod-subpath-test-preprovisionedpv-llwz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-llwz
Oct 11 05:09:53.636: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-llwz" in namespace "provisioning-470"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
Oct 11 05:09:44.346: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:10:01.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2457" for this suite.
STEP: Destroying namespace "webhook-2457-markers" for this suite.
... skipping 4 lines ...
• [SLOW TEST:25.670 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:03.509: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
Oct 11 05:09:40.153: INFO: PersistentVolumeClaim pvc-k44q7 found but phase is Pending instead of Bound.
Oct 11 05:09:42.389: INFO: PersistentVolumeClaim pvc-k44q7 found and phase=Bound (13.657137754s)
Oct 11 05:09:42.389: INFO: Waiting up to 3m0s for PersistentVolume local-pfnhf to have phase Bound
Oct 11 05:09:42.625: INFO: PersistentVolume local-pfnhf found and phase=Bound (235.660965ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-lwgt
STEP: Creating a pod to test exec-volume-test
Oct 11 05:09:43.342: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-lwgt" in namespace "volume-5010" to be "Succeeded or Failed"
Oct 11 05:09:43.578: INFO: Pod "exec-volume-test-preprovisionedpv-lwgt": Phase="Pending", Reason="", readiness=false. Elapsed: 235.737065ms
Oct 11 05:09:45.815: INFO: Pod "exec-volume-test-preprovisionedpv-lwgt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473293056s
Oct 11 05:09:48.052: INFO: Pod "exec-volume-test-preprovisionedpv-lwgt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.710277989s
Oct 11 05:09:50.289: INFO: Pod "exec-volume-test-preprovisionedpv-lwgt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.947099661s
Oct 11 05:09:52.525: INFO: Pod "exec-volume-test-preprovisionedpv-lwgt": Phase="Pending", Reason="", readiness=false. Elapsed: 9.183207434s
Oct 11 05:09:54.762: INFO: Pod "exec-volume-test-preprovisionedpv-lwgt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.420308597s
Oct 11 05:09:57.000: INFO: Pod "exec-volume-test-preprovisionedpv-lwgt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.65802616s
STEP: Saw pod success
Oct 11 05:09:57.000: INFO: Pod "exec-volume-test-preprovisionedpv-lwgt" satisfied condition "Succeeded or Failed"
Oct 11 05:09:57.236: INFO: Trying to get logs from node ip-172-20-43-95.ap-south-1.compute.internal pod exec-volume-test-preprovisionedpv-lwgt container exec-container-preprovisionedpv-lwgt: <nil>
STEP: delete the pod
Oct 11 05:09:57.719: INFO: Waiting for pod exec-volume-test-preprovisionedpv-lwgt to disappear
Oct 11 05:09:57.955: INFO: Pod exec-volume-test-preprovisionedpv-lwgt no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-lwgt
Oct 11 05:09:57.955: INFO: Deleting pod "exec-volume-test-preprovisionedpv-lwgt" in namespace "volume-5010"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:10:03.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap that has name configmap-test-emptyKey-23283627-43ac-4b13-aeb1-c7c7327f0167
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:10:04.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8436" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 48 lines ...
Oct 11 05:09:53.456: INFO: PersistentVolumeClaim pvc-mhn7s found but phase is Pending instead of Bound.
Oct 11 05:09:55.695: INFO: PersistentVolumeClaim pvc-mhn7s found and phase=Bound (6.956135842s)
Oct 11 05:09:55.695: INFO: Waiting up to 3m0s for PersistentVolume local-sqddw to have phase Bound
Oct 11 05:09:55.933: INFO: PersistentVolume local-sqddw found and phase=Bound (237.469514ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bqcd
STEP: Creating a pod to test subpath
Oct 11 05:09:56.647: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bqcd" in namespace "provisioning-9454" to be "Succeeded or Failed"
Oct 11 05:09:56.886: INFO: Pod "pod-subpath-test-preprovisionedpv-bqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 238.613015ms
Oct 11 05:09:59.129: INFO: Pod "pod-subpath-test-preprovisionedpv-bqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482389719s
Oct 11 05:10:01.368: INFO: Pod "pod-subpath-test-preprovisionedpv-bqcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.721144284s
STEP: Saw pod success
Oct 11 05:10:01.368: INFO: Pod "pod-subpath-test-preprovisionedpv-bqcd" satisfied condition "Succeeded or Failed"
Oct 11 05:10:01.608: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-bqcd container test-container-subpath-preprovisionedpv-bqcd: <nil>
STEP: delete the pod
Oct 11 05:10:02.093: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bqcd to disappear
Oct 11 05:10:02.343: INFO: Pod pod-subpath-test-preprovisionedpv-bqcd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bqcd
Oct 11 05:10:02.343: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bqcd" in namespace "provisioning-9454"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":32,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:05.609: INFO: Only supported for providers [openstack] (not aws)
... skipping 79 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":2,"skipped":0,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:09:54.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:09.208: INFO: Only supported for providers [vsphere] (not aws)
... skipping 72 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:09:20.027: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
Oct 11 05:09:21.216: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6471qvl9g
STEP: creating a claim
Oct 11 05:09:21.454: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-hvkc
STEP: Creating a pod to test subpath
Oct 11 05:09:22.178: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hvkc" in namespace "provisioning-6471" to be "Succeeded or Failed"
Oct 11 05:09:22.419: INFO: Pod "pod-subpath-test-dynamicpv-hvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 240.708395ms
Oct 11 05:09:24.657: INFO: Pod "pod-subpath-test-dynamicpv-hvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478975623s
Oct 11 05:09:26.896: INFO: Pod "pod-subpath-test-dynamicpv-hvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.717537282s
Oct 11 05:09:29.135: INFO: Pod "pod-subpath-test-dynamicpv-hvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.956460583s
Oct 11 05:09:31.374: INFO: Pod "pod-subpath-test-dynamicpv-hvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.195715915s
Oct 11 05:09:33.613: INFO: Pod "pod-subpath-test-dynamicpv-hvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.435048184s
Oct 11 05:09:35.851: INFO: Pod "pod-subpath-test-dynamicpv-hvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.672697404s
Oct 11 05:09:38.089: INFO: Pod "pod-subpath-test-dynamicpv-hvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.910550957s
Oct 11 05:09:40.350: INFO: Pod "pod-subpath-test-dynamicpv-hvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.171136148s
Oct 11 05:09:42.588: INFO: Pod "pod-subpath-test-dynamicpv-hvkc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.4092727s
STEP: Saw pod success
Oct 11 05:09:42.588: INFO: Pod "pod-subpath-test-dynamicpv-hvkc" satisfied condition "Succeeded or Failed"
Oct 11 05:09:42.826: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-hvkc container test-container-subpath-dynamicpv-hvkc: <nil>
STEP: delete the pod
Oct 11 05:09:43.312: INFO: Waiting for pod pod-subpath-test-dynamicpv-hvkc to disappear
Oct 11 05:09:43.550: INFO: Pod pod-subpath-test-dynamicpv-hvkc no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-hvkc
Oct 11 05:09:43.550: INFO: Deleting pod "pod-subpath-test-dynamicpv-hvkc" in namespace "provisioning-6471"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:11.904: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":1,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:12.537: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
Oct 11 05:09:27.344: INFO: PersistentVolumeClaim pvc-tj7hx found and phase=Bound (245.383605ms)
Oct 11 05:09:27.344: INFO: Waiting up to 3m0s for PersistentVolume nfs-r2khc to have phase Bound
Oct 11 05:09:27.589: INFO: PersistentVolume nfs-r2khc found and phase=Bound (245.070934ms)
[It] should test that a PV becomes Available and is clean after the PVC is deleted.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
STEP: Writing to the volume.
Oct 11 05:09:28.326: INFO: Waiting up to 5m0s for pod "pvc-tester-n9mcn" in namespace "pv-5840" to be "Succeeded or Failed"
Oct 11 05:09:28.571: INFO: Pod "pvc-tester-n9mcn": Phase="Pending", Reason="", readiness=false. Elapsed: 244.895745ms
Oct 11 05:09:30.817: INFO: Pod "pvc-tester-n9mcn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.490868147s
STEP: Saw pod success
Oct 11 05:09:30.817: INFO: Pod "pvc-tester-n9mcn" satisfied condition "Succeeded or Failed"
STEP: Deleting the claim
Oct 11 05:09:30.817: INFO: Deleting pod "pvc-tester-n9mcn" in namespace "pv-5840"
Oct 11 05:09:31.076: INFO: Wait up to 5m0s for pod "pvc-tester-n9mcn" to be fully deleted
Oct 11 05:09:31.321: INFO: Deleting PVC pvc-tj7hx to trigger reclamation of PV 
Oct 11 05:09:31.321: INFO: Deleting PersistentVolumeClaim "pvc-tj7hx"
Oct 11 05:09:31.567: INFO: Waiting for reclaim process to complete.
... skipping 9 lines ...
Oct 11 05:09:49.807: INFO: PersistentVolume nfs-r2khc found and phase=Available (18.239283886s)
Oct 11 05:09:50.051: INFO: PV nfs-r2khc now in "Available" phase
STEP: Re-mounting the volume.
Oct 11 05:09:50.299: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-qcwls] to have phase Bound
Oct 11 05:09:50.545: INFO: PersistentVolumeClaim pvc-qcwls found and phase=Bound (244.987655ms)
STEP: Verifying the mount has been cleaned.
Oct 11 05:09:50.791: INFO: Waiting up to 5m0s for pod "pvc-tester-zkj47" in namespace "pv-5840" to be "Succeeded or Failed"
Oct 11 05:09:51.036: INFO: Pod "pvc-tester-zkj47": Phase="Pending", Reason="", readiness=false. Elapsed: 245.415404ms
Oct 11 05:09:53.282: INFO: Pod "pvc-tester-zkj47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.491547536s
STEP: Saw pod success
Oct 11 05:09:53.283: INFO: Pod "pvc-tester-zkj47" satisfied condition "Succeeded or Failed"
Oct 11 05:09:53.283: INFO: Deleting pod "pvc-tester-zkj47" in namespace "pv-5840"
Oct 11 05:09:53.532: INFO: Wait up to 5m0s for pod "pvc-tester-zkj47" to be fully deleted
Oct 11 05:09:53.777: INFO: Pod exited without failure; the volume has been recycled.
Oct 11 05:09:53.777: INFO: Removing second PVC, waiting for the recycler to finish before cleanup.
Oct 11 05:09:53.777: INFO: Deleting PVC pvc-qcwls to trigger reclamation of PV 
Oct 11 05:09:53.777: INFO: Deleting PersistentVolumeClaim "pvc-qcwls"
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    when invoking the Recycle reclaim policy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:265
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:15.260 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:13.737: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 31 lines ...
STEP: Destroying namespace "apply-4986" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:14.736: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 46 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 71 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291
    should create and stop a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:15.146: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:10:15.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-1222" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:15.730: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 86 lines ...
Oct 11 05:09:45.792: INFO: PersistentVolume nfs-kk8lk found and phase=Bound (244.701315ms)
Oct 11 05:09:46.041: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cjn7w] to have phase Bound
Oct 11 05:09:46.286: INFO: PersistentVolumeClaim pvc-cjn7w found and phase=Bound (244.943426ms)
STEP: Checking pod has write access to PersistentVolumes
Oct 11 05:09:46.531: INFO: Creating nfs test pod
Oct 11 05:09:46.777: INFO: Pod should terminate with exitcode 0 (success)
Oct 11 05:09:46.777: INFO: Waiting up to 5m0s for pod "pvc-tester-d9qkk" in namespace "pv-2430" to be "Succeeded or Failed"
Oct 11 05:09:47.022: INFO: Pod "pvc-tester-d9qkk": Phase="Pending", Reason="", readiness=false. Elapsed: 244.770515ms
Oct 11 05:09:49.271: INFO: Pod "pvc-tester-d9qkk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493428207s
Oct 11 05:09:51.516: INFO: Pod "pvc-tester-d9qkk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.739052276s
Oct 11 05:09:53.761: INFO: Pod "pvc-tester-d9qkk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.984156821s
Oct 11 05:09:56.007: INFO: Pod "pvc-tester-d9qkk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.230126664s
STEP: Saw pod success
Oct 11 05:09:56.008: INFO: Pod "pvc-tester-d9qkk" satisfied condition "Succeeded or Failed"
Oct 11 05:09:56.008: INFO: Pod pvc-tester-d9qkk succeeded 
Oct 11 05:09:56.008: INFO: Deleting pod "pvc-tester-d9qkk" in namespace "pv-2430"
Oct 11 05:09:56.256: INFO: Wait up to 5m0s for pod "pvc-tester-d9qkk" to be fully deleted
Oct 11 05:09:56.746: INFO: Creating nfs test pod
Oct 11 05:09:56.996: INFO: Pod should terminate with exitcode 0 (success)
Oct 11 05:09:56.996: INFO: Waiting up to 5m0s for pod "pvc-tester-9wrxr" in namespace "pv-2430" to be "Succeeded or Failed"
Oct 11 05:09:57.243: INFO: Pod "pvc-tester-9wrxr": Phase="Pending", Reason="", readiness=false. Elapsed: 246.342545ms
Oct 11 05:09:59.489: INFO: Pod "pvc-tester-9wrxr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492678339s
Oct 11 05:10:01.735: INFO: Pod "pvc-tester-9wrxr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.739068523s
STEP: Saw pod success
Oct 11 05:10:01.735: INFO: Pod "pvc-tester-9wrxr" satisfied condition "Succeeded or Failed"
Oct 11 05:10:01.735: INFO: Pod pvc-tester-9wrxr succeeded 
Oct 11 05:10:01.735: INFO: Deleting pod "pvc-tester-9wrxr" in namespace "pv-2430"
Oct 11 05:10:01.991: INFO: Wait up to 5m0s for pod "pvc-tester-9wrxr" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Oct 11 05:10:03.220: INFO: Deleting PVC pvc-gfldh to trigger reclamation of PV nfs-xgtlc
Oct 11 05:10:03.220: INFO: Deleting PersistentVolumeClaim "pvc-gfldh"
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 2 PVs and 4 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:17.416: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 131 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":5,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:19.640: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:20.630: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 60 lines ...
Oct 11 05:09:55.786: INFO: >>> kubeConfig: /root/.kube/config
Oct 11 05:10:02.366: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323
Oct 11 05:10:02.366: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.20.33.34 http://127.0.0.1:54323/hostname] Namespace:hostport-9631 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 11 05:10:02.366: INFO: >>> kubeConfig: /root/.kube/config
Oct 11 05:10:08.876: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323
Oct 11 05:10:08.876: FAIL: Failed to connect to exposed host ports

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00252c480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00252c480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
... skipping 222 lines ...
• Failure [68.659 seconds]
[sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 11 05:10:08.876: Failed to connect to exposed host ports

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
SSS
------------------------------
{"msg":"FAILED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":0,"skipped":14,"failed":1,"failures":["[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:20.645: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 8 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":21,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:22.507: INFO: Only supported for providers [gce gke] (not aws)
... skipping 27 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:10:22.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:23.246: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 111 lines ...
• [SLOW TEST:44.821 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 39 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:9.877 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:29.560: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
• [SLOW TEST:9.260 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:10:25.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Oct 11 05:10:27.119: INFO: Waiting up to 5m0s for pod "client-containers-009d0d2d-1abd-414e-a61e-85a603594e7e" in namespace "containers-704" to be "Succeeded or Failed"
Oct 11 05:10:27.359: INFO: Pod "client-containers-009d0d2d-1abd-414e-a61e-85a603594e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 239.834455ms
Oct 11 05:10:29.595: INFO: Pod "client-containers-009d0d2d-1abd-414e-a61e-85a603594e7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.475896303s
STEP: Saw pod success
Oct 11 05:10:29.595: INFO: Pod "client-containers-009d0d2d-1abd-414e-a61e-85a603594e7e" satisfied condition "Succeeded or Failed"
Oct 11 05:10:29.836: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod client-containers-009d0d2d-1abd-414e-a61e-85a603594e7e container agnhost-container: <nil>
STEP: delete the pod
Oct 11 05:10:30.311: INFO: Waiting for pod client-containers-009d0d2d-1abd-414e-a61e-85a603594e7e to disappear
Oct 11 05:10:30.548: INFO: Pod client-containers-009d0d2d-1abd-414e-a61e-85a603594e7e no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.359 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 431 lines ...
• [SLOW TEST:21.704 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:34.973: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 58 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:09:24.553: INFO: >>> kubeConfig: /root/.kube/config
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:35.410: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
Oct 11 05:10:29.190: INFO: Creating a PV followed by a PVC
Oct 11 05:10:29.668: INFO: Waiting for PV local-pv4l2lk to bind to PVC pvc-7l6xw
Oct 11 05:10:29.668: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7l6xw] to have phase Bound
Oct 11 05:10:29.904: INFO: PersistentVolumeClaim pvc-7l6xw found and phase=Bound (236.739815ms)
Oct 11 05:10:29.904: INFO: Waiting up to 3m0s for PersistentVolume local-pv4l2lk to have phase Bound
Oct 11 05:10:30.140: INFO: PersistentVolume local-pv4l2lk found and phase=Bound (235.997346ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
STEP: Initializing test volumes
Oct 11 05:10:30.613: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1e944ed6-b4f6-4f93-89c7-b75dbe1f62b3] Namespace:persistent-local-volumes-test-506 PodName:hostexec-ip-172-20-33-34.ap-south-1.compute.internal-756n7 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 11 05:10:30.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:13.457 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":5,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, insufficient capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:42.896: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 124 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Oct 11 05:10:21.849: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 11 05:10:22.085: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-47fc
STEP: Creating a pod to test atomic-volume-subpath
Oct 11 05:10:22.323: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-47fc" in namespace "provisioning-7829" to be "Succeeded or Failed"
Oct 11 05:10:22.558: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Pending", Reason="", readiness=false. Elapsed: 234.806066ms
Oct 11 05:10:24.801: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478397533s
Oct 11 05:10:27.044: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.721665141s
Oct 11 05:10:29.303: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.979693346s
Oct 11 05:10:31.544: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.221062775s
Oct 11 05:10:33.780: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.457338792s
Oct 11 05:10:36.016: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Running", Reason="", readiness=true. Elapsed: 13.69311047s
Oct 11 05:10:38.252: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Running", Reason="", readiness=true. Elapsed: 15.929668117s
Oct 11 05:10:40.490: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Running", Reason="", readiness=true. Elapsed: 18.167498495s
Oct 11 05:10:42.729: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Running", Reason="", readiness=true. Elapsed: 20.406578986s
Oct 11 05:10:44.964: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Running", Reason="", readiness=true. Elapsed: 22.641670246s
Oct 11 05:10:47.201: INFO: Pod "pod-subpath-test-inlinevolume-47fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.878414204s
STEP: Saw pod success
Oct 11 05:10:47.201: INFO: Pod "pod-subpath-test-inlinevolume-47fc" satisfied condition "Succeeded or Failed"
Oct 11 05:10:47.437: INFO: Trying to get logs from node ip-172-20-42-144.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-47fc container test-container-subpath-inlinevolume-47fc: <nil>
STEP: delete the pod
Oct 11 05:10:47.923: INFO: Waiting for pod pod-subpath-test-inlinevolume-47fc to disappear
Oct 11 05:10:48.158: INFO: Pod pod-subpath-test-inlinevolume-47fc no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-47fc
Oct 11 05:10:48.158: INFO: Deleting pod "pod-subpath-test-inlinevolume-47fc" in namespace "provisioning-7829"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":17,"failed":1,"failures":["[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:49.111: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 143 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:51.772: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 44 lines ...
Oct 11 05:10:47.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 11 05:10:48.801: INFO: Waiting up to 5m0s for pod "pod-84ead519-1c39-4da8-b4f9-bce9194b0a7c" in namespace "emptydir-5372" to be "Succeeded or Failed"
Oct 11 05:10:49.047: INFO: Pod "pod-84ead519-1c39-4da8-b4f9-bce9194b0a7c": Phase="Pending", Reason="", readiness=false. Elapsed: 245.105766ms
Oct 11 05:10:51.292: INFO: Pod "pod-84ead519-1c39-4da8-b4f9-bce9194b0a7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.491059596s
STEP: Saw pod success
Oct 11 05:10:51.293: INFO: Pod "pod-84ead519-1c39-4da8-b4f9-bce9194b0a7c" satisfied condition "Succeeded or Failed"
Oct 11 05:10:51.538: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod pod-84ead519-1c39-4da8-b4f9-bce9194b0a7c container test-container: <nil>
STEP: delete the pod
Oct 11 05:10:52.041: INFO: Waiting for pod pod-84ead519-1c39-4da8-b4f9-bce9194b0a7c to disappear
Oct 11 05:10:52.286: INFO: Pod pod-84ead519-1c39-4da8-b4f9-bce9194b0a7c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.454 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Oct 11 05:10:39.819: INFO: PersistentVolumeClaim pvc-8884f found but phase is Pending instead of Bound.
Oct 11 05:10:42.062: INFO: PersistentVolumeClaim pvc-8884f found and phase=Bound (9.184767199s)
Oct 11 05:10:42.063: INFO: Waiting up to 3m0s for PersistentVolume local-vdxj8 to have phase Bound
Oct 11 05:10:42.298: INFO: PersistentVolume local-vdxj8 found and phase=Bound (235.503136ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xm2l
STEP: Creating a pod to test subpath
Oct 11 05:10:43.012: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xm2l" in namespace "provisioning-9640" to be "Succeeded or Failed"
Oct 11 05:10:43.249: INFO: Pod "pod-subpath-test-preprovisionedpv-xm2l": Phase="Pending", Reason="", readiness=false. Elapsed: 236.961816ms
Oct 11 05:10:45.484: INFO: Pod "pod-subpath-test-preprovisionedpv-xm2l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472510295s
Oct 11 05:10:47.719: INFO: Pod "pod-subpath-test-preprovisionedpv-xm2l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707348833s
Oct 11 05:10:49.955: INFO: Pod "pod-subpath-test-preprovisionedpv-xm2l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.942697414s
STEP: Saw pod success
Oct 11 05:10:49.955: INFO: Pod "pod-subpath-test-preprovisionedpv-xm2l" satisfied condition "Succeeded or Failed"
Oct 11 05:10:50.189: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-xm2l container test-container-subpath-preprovisionedpv-xm2l: <nil>
STEP: delete the pod
Oct 11 05:10:50.666: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xm2l to disappear
Oct 11 05:10:50.901: INFO: Pod pod-subpath-test-preprovisionedpv-xm2l no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xm2l
Oct 11 05:10:50.901: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xm2l" in namespace "provisioning-9640"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:54.052: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:10:53.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4365" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Oct 11 05:10:38.928: INFO: PersistentVolumeClaim pvc-c2zwr found but phase is Pending instead of Bound.
Oct 11 05:10:41.166: INFO: PersistentVolumeClaim pvc-c2zwr found and phase=Bound (2.472504502s)
Oct 11 05:10:41.166: INFO: Waiting up to 3m0s for PersistentVolume local-bwh6h to have phase Bound
Oct 11 05:10:41.419: INFO: PersistentVolume local-bwh6h found and phase=Bound (252.986256ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-b4jq
STEP: Creating a pod to test subpath
Oct 11 05:10:42.154: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-b4jq" in namespace "provisioning-5937" to be "Succeeded or Failed"
Oct 11 05:10:42.391: INFO: Pod "pod-subpath-test-preprovisionedpv-b4jq": Phase="Pending", Reason="", readiness=false. Elapsed: 237.399396ms
Oct 11 05:10:44.626: INFO: Pod "pod-subpath-test-preprovisionedpv-b4jq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472009836s
Oct 11 05:10:46.861: INFO: Pod "pod-subpath-test-preprovisionedpv-b4jq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707142494s
Oct 11 05:10:49.097: INFO: Pod "pod-subpath-test-preprovisionedpv-b4jq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.942562731s
STEP: Saw pod success
Oct 11 05:10:49.097: INFO: Pod "pod-subpath-test-preprovisionedpv-b4jq" satisfied condition "Succeeded or Failed"
Oct 11 05:10:49.331: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-b4jq container test-container-subpath-preprovisionedpv-b4jq: <nil>
STEP: delete the pod
Oct 11 05:10:49.806: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-b4jq to disappear
Oct 11 05:10:50.042: INFO: Pod pod-subpath-test-preprovisionedpv-b4jq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-b4jq
Oct 11 05:10:50.042: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-b4jq" in namespace "provisioning-5937"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:64.431 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:10:56.366: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 70 lines ...
Oct 11 05:10:39.049: INFO: PersistentVolumeClaim pvc-86wqw found but phase is Pending instead of Bound.
Oct 11 05:10:41.295: INFO: PersistentVolumeClaim pvc-86wqw found and phase=Bound (15.966918077s)
Oct 11 05:10:41.295: INFO: Waiting up to 3m0s for PersistentVolume local-fh7p5 to have phase Bound
Oct 11 05:10:41.547: INFO: PersistentVolume local-fh7p5 found and phase=Bound (251.767896ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hm4n
STEP: Creating a pod to test subpath
Oct 11 05:10:42.278: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hm4n" in namespace "provisioning-7687" to be "Succeeded or Failed"
Oct 11 05:10:42.521: INFO: Pod "pod-subpath-test-preprovisionedpv-hm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 242.340228ms
Oct 11 05:10:44.766: INFO: Pod "pod-subpath-test-preprovisionedpv-hm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.488166276s
Oct 11 05:10:47.009: INFO: Pod "pod-subpath-test-preprovisionedpv-hm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.730558685s
Oct 11 05:10:49.252: INFO: Pod "pod-subpath-test-preprovisionedpv-hm4n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.973270831s
STEP: Saw pod success
Oct 11 05:10:49.252: INFO: Pod "pod-subpath-test-preprovisionedpv-hm4n" satisfied condition "Succeeded or Failed"
Oct 11 05:10:49.494: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-hm4n container test-container-subpath-preprovisionedpv-hm4n: <nil>
STEP: delete the pod
Oct 11 05:10:49.992: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hm4n to disappear
Oct 11 05:10:50.234: INFO: Pod pod-subpath-test-preprovisionedpv-hm4n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hm4n
Oct 11 05:10:50.234: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hm4n" in namespace "provisioning-7687"
STEP: Creating pod pod-subpath-test-preprovisionedpv-hm4n
STEP: Creating a pod to test subpath
Oct 11 05:10:50.722: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hm4n" in namespace "provisioning-7687" to be "Succeeded or Failed"
Oct 11 05:10:50.966: INFO: Pod "pod-subpath-test-preprovisionedpv-hm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 244.007136ms
Oct 11 05:10:53.209: INFO: Pod "pod-subpath-test-preprovisionedpv-hm4n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.486576886s
STEP: Saw pod success
Oct 11 05:10:53.209: INFO: Pod "pod-subpath-test-preprovisionedpv-hm4n" satisfied condition "Succeeded or Failed"
Oct 11 05:10:53.451: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-hm4n container test-container-subpath-preprovisionedpv-hm4n: <nil>
STEP: delete the pod
Oct 11 05:10:53.943: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hm4n to disappear
Oct 11 05:10:54.186: INFO: Pod pod-subpath-test-preprovisionedpv-hm4n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hm4n
Oct 11 05:10:54.186: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hm4n" in namespace "provisioning-7687"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-7p26
STEP: Creating a pod to test atomic-volume-subpath
Oct 11 05:10:37.395: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7p26" in namespace "subpath-1744" to be "Succeeded or Failed"
Oct 11 05:10:37.635: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Pending", Reason="", readiness=false. Elapsed: 239.932926ms
Oct 11 05:10:39.875: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Running", Reason="", readiness=true. Elapsed: 2.480415443s
Oct 11 05:10:42.125: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Running", Reason="", readiness=true. Elapsed: 4.729976774s
Oct 11 05:10:44.369: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Running", Reason="", readiness=true. Elapsed: 6.973688933s
Oct 11 05:10:46.609: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Running", Reason="", readiness=true. Elapsed: 9.214523582s
Oct 11 05:10:48.851: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Running", Reason="", readiness=true. Elapsed: 11.456309409s
Oct 11 05:10:51.092: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Running", Reason="", readiness=true. Elapsed: 13.69726852s
Oct 11 05:10:53.333: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Running", Reason="", readiness=true. Elapsed: 15.93847052s
Oct 11 05:10:55.574: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Running", Reason="", readiness=true. Elapsed: 18.17908102s
Oct 11 05:10:57.815: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Running", Reason="", readiness=true. Elapsed: 20.419587812s
Oct 11 05:11:00.058: INFO: Pod "pod-subpath-test-projected-7p26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.662575164s
STEP: Saw pod success
Oct 11 05:11:00.058: INFO: Pod "pod-subpath-test-projected-7p26" satisfied condition "Succeeded or Failed"
Oct 11 05:11:00.302: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod pod-subpath-test-projected-7p26 container test-container-subpath-projected-7p26: <nil>
STEP: delete the pod
Oct 11 05:11:00.788: INFO: Waiting for pod pod-subpath-test-projected-7p26 to disappear
Oct 11 05:11:01.028: INFO: Pod pod-subpath-test-projected-7p26 no longer exists
STEP: Deleting pod pod-subpath-test-projected-7p26
Oct 11 05:11:01.028: INFO: Deleting pod "pod-subpath-test-projected-7p26" in namespace "subpath-1744"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:01.788: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":4,"skipped":24,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:02.535: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 83 lines ...
• [SLOW TEST:8.881 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:194
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":4,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:02.973: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 161 lines ...
• [SLOW TEST:6.161 seconds]
[sig-network] NetworkPolicy API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:05.030: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Oct 11 05:11:04.051: INFO: Waiting up to 5m0s for pod "busybox-user-0-27f6df36-5a7c-4f6d-b1de-af7d2a00480b" in namespace "security-context-test-9480" to be "Succeeded or Failed"
Oct 11 05:11:04.296: INFO: Pod "busybox-user-0-27f6df36-5a7c-4f6d-b1de-af7d2a00480b": Phase="Pending", Reason="", readiness=false. Elapsed: 245.629096ms
Oct 11 05:11:06.543: INFO: Pod "busybox-user-0-27f6df36-5a7c-4f6d-b1de-af7d2a00480b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.492586227s
Oct 11 05:11:06.543: INFO: Pod "busybox-user-0-27f6df36-5a7c-4f6d-b1de-af7d2a00480b" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:11:06.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9480" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:11:01.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-4b3f8010-bdf9-43e2-909e-8e3fd84be0a4
STEP: Creating a pod to test consume secrets
Oct 11 05:11:03.515: INFO: Waiting up to 5m0s for pod "pod-secrets-07b05d0b-2794-41b9-96ef-c4a805544bed" in namespace "secrets-691" to be "Succeeded or Failed"
Oct 11 05:11:03.755: INFO: Pod "pod-secrets-07b05d0b-2794-41b9-96ef-c4a805544bed": Phase="Pending", Reason="", readiness=false. Elapsed: 240.039086ms
Oct 11 05:11:05.996: INFO: Pod "pod-secrets-07b05d0b-2794-41b9-96ef-c4a805544bed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.481429868s
STEP: Saw pod success
Oct 11 05:11:05.997: INFO: Pod "pod-secrets-07b05d0b-2794-41b9-96ef-c4a805544bed" satisfied condition "Succeeded or Failed"
Oct 11 05:11:06.237: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod pod-secrets-07b05d0b-2794-41b9-96ef-c4a805544bed container secret-env-test: <nil>
STEP: delete the pod
Oct 11 05:11:06.768: INFO: Waiting for pod pod-secrets-07b05d0b-2794-41b9-96ef-c4a805544bed to disappear
Oct 11 05:11:07.007: INFO: Pod pod-secrets-07b05d0b-2794-41b9-96ef-c4a805544bed no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 60 lines ...
Oct 11 05:10:06.735: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-z72f2] to have phase Bound
Oct 11 05:10:06.973: INFO: PersistentVolumeClaim pvc-z72f2 found and phase=Bound (238.041876ms)
STEP: Deleting the previously created pod
Oct 11 05:10:14.165: INFO: Deleting pod "pvc-volume-tester-g7dx2" in namespace "csi-mock-volumes-415"
Oct 11 05:10:14.404: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g7dx2" to be fully deleted
STEP: Checking CSI driver logs
Oct 11 05:10:29.123: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/98946f7c-2e9b-4f8c-af20-4d40daccfd1e/volumes/kubernetes.io~csi/pvc-1bb17108-e9a1-46c8-b373-caa1c2ef9496/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-g7dx2
Oct 11 05:10:29.123: INFO: Deleting pod "pvc-volume-tester-g7dx2" in namespace "csi-mock-volumes-415"
STEP: Deleting claim pvc-z72f2
Oct 11 05:10:29.859: INFO: Waiting up to 2m0s for PersistentVolume pvc-1bb17108-e9a1-46c8-b373-caa1c2ef9496 to get deleted
Oct 11 05:10:30.098: INFO: PersistentVolume pvc-1bb17108-e9a1-46c8-b373-caa1c2ef9496 was removed
STEP: Deleting storageclass csi-mock-volumes-415-scwll2m
... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":1,"skipped":9,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 80 lines ...
Oct 11 05:11:13.149: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct 11 05:11:13.150: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6264 describe pod agnhost-primary-w8bfn'
Oct 11 05:11:14.427: INFO: stderr: ""
Oct 11 05:11:14.427: INFO: stdout: "Name:         agnhost-primary-w8bfn\nNamespace:    kubectl-6264\nPriority:     0\nNode:         ip-172-20-33-34.ap-south-1.compute.internal/172.20.33.34\nStart Time:   Mon, 11 Oct 2021 05:11:09 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.4.55\nIPs:\n  IP:           100.96.4.55\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://83c4c5a914b67b6a49f170a575ee20d3efb2001226de1de7d33c20d084f2459b\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 11 Oct 2021 05:11:10 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kjdth (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-kjdth:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  5s    default-scheduler  Successfully assigned 
kubectl-6264/agnhost-primary-w8bfn to ip-172-20-33-34.ap-south-1.compute.internal\n  Normal  Pulled     4s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    4s    kubelet            Created container agnhost-primary\n  Normal  Started    4s    kubelet            Started container agnhost-primary\n"
Oct 11 05:11:14.427: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6264 describe rc agnhost-primary'
Oct 11 05:11:15.962: INFO: stderr: ""
Oct 11 05:11:15.962: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-6264\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  6s    replication-controller  Created pod: agnhost-primary-w8bfn\n"
Oct 11 05:11:15.962: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6264 describe service agnhost-primary'
Oct 11 05:11:17.465: INFO: stderr: ""
Oct 11 05:11:17.465: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-6264\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.65.251.63\nIPs:               100.65.251.63\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.4.55:6379\nSession Affinity:  None\nEvents:            <none>\n"
Oct 11 05:11:17.711: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6264 describe node ip-172-20-33-34.ap-south-1.compute.internal'
Oct 11 05:11:19.783: INFO: stderr: ""
Oct 11 05:11:19.783: INFO: stdout: "Name:               ip-172-20-33-34.ap-south-1.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.medium\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=ap-south-1\n                    failure-domain.beta.kubernetes.io/zone=ap-south-1a\n                    kops.k8s.io/instancegroup=nodes-ap-south-1a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-33-34.ap-south-1.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=t3.medium\n                    topology.kubernetes.io/region=ap-south-1\n                    topology.kubernetes.io/zone=ap-south-1a\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 11 Oct 2021 05:06:09 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-33-34.ap-south-1.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 11 Oct 2021 05:11:17 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Mon, 11 Oct 2021 05:06:12 +0000   Mon, 11 Oct 2021 05:06:12 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Mon, 11 Oct 2021 05:10:49 +0000   Mon, 11 Oct 2021 05:06:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         
False   Mon, 11 Oct 2021 05:10:49 +0000   Mon, 11 Oct 2021 05:06:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 11 Oct 2021 05:10:49 +0000   Mon, 11 Oct 2021 05:06:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 11 Oct 2021 05:10:49 +0000   Mon, 11 Oct 2021 05:06:19 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:   172.20.33.34
  ExternalIP:   13.235.245.56
  Hostname:     ip-172-20-33-34.ap-south-1.compute.internal
  InternalDNS:  ip-172-20-33-34.ap-south-1.compute.internal
  ExternalDNS:  ec2-13-235-245-56.ap-south-1.compute.amazonaws.com
Capacity:
  attachable-volumes-aws-ebs:  25
  cpu:                         2
  ephemeral-storage:           50319340Ki
  hugepages-1Gi:               0
  hugepages-2Mi:               0
  memory:                      3764932Ki
  pods:                        110
Allocatable:
  attachable-volumes-aws-ebs:  25
  cpu:                         2
  ephemeral-storage:           46374303668
  hugepages-1Gi:               0
  hugepages-2Mi:               0
  memory:                      3662532Ki
  pods:                        110
System Info:
  Machine ID:                 ec29621daf83ff588df19e70f2e6ec97
  System UUID:                ec29621d-af83-ff58-8df1-9e70f2e6ec97
  Boot ID:                    dd8240ee-bc84-4886-9b45-bd98192525bb
  Kernel Version:             4.18.0-305.12.1.el8_4.x86_64
  OS Image:                   Red Hat Enterprise Linux 8.4 (Ootpa)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.11
  Kubelet Version:            v1.21.5
  Kube-Proxy Version:         v1.21.5
PodCIDR:                      100.96.4.0/24
PodCIDRs:                     100.96.4.0/24
ProviderID:                   aws:///ap-south-1a/i-0fa79d1cf19aa88af
Non-terminated Pods:          (14 in total)
  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
  configmap-3621              pod-configmaps-0684d19d-afcc-4c7b-a1e9-7008697ecc41           0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
  cronjob-8869                concurrent-27232151-cgdlj                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
  deployment-5807             test-rollover-deployment-98c5f4599-btw7z                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
  kube-system                 kube-proxy-ip-172-20-33-34.ap-south-1.compute.internal        100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
  kubectl-4367                httpd                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
  kubectl-6264                agnhost-primary-w8bfn                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
  nettest-3545                netserver-0                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
  proxy-5817                  proxy-service-tsxt4-jdrv6                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
  services-4899               service-headless-88tgg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
  services-4899               service-headless-toggled-tvnmv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
  services-4899               verify-service-up-exec-pod-h5pfp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
  services-9774               externalsvc-hbl5z                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
  volumemode-9625             hostexec-ip-172-20-33-34.ap-south-1.compute.internal-n4564    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
  volumemode-9625             pod-8b40bde6-48f5-4faf-9000-08c194059581                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                    Requests   Limits
  --------                    --------   ------
  cpu                         100m (5%)  0 (0%)
  memory                      0 (0%)     0 (0%)
  ephemeral-storage           0 (0%)     0 (0%)
  hugepages-1Gi               0 (0%)     0 (0%)
  hugepages-2Mi               0 (0%)     0 (0%)
  attachable-volumes-aws-ebs  0          0
Events:
  Type     Reason                   Age                    From        Message
  ----     ------                   ----                   ----        -------
  Normal   Starting                 6m11s                  kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      6m11s                  kubelet     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  6m11s                  kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasNoDiskPressure    5m41s (x7 over 6m11s)  kubelet     Node ip-172-20-33-34.ap-south-1.compute.internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     5m41s (x7 over 6m11s)  kubelet     Node ip-172-20-33-34.ap-south-1.compute.internal status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  5m10s (x8 over 6m11s)  kubelet     Node ip-172-20-33-34.ap-south-1.compute.internal status is now: NodeHasSufficientMemory
  Normal   Starting                 5m4s                   kube-proxy  Starting kube-proxy.
"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:21.805: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 74 lines ...
Oct 11 05:10:05.940: INFO: PersistentVolumeClaim csi-hostpathsmdml found but phase is Pending instead of Bound.
Oct 11 05:10:08.178: INFO: PersistentVolumeClaim csi-hostpathsmdml found but phase is Pending instead of Bound.
Oct 11 05:10:10.416: INFO: PersistentVolumeClaim csi-hostpathsmdml found but phase is Pending instead of Bound.
Oct 11 05:10:12.661: INFO: PersistentVolumeClaim csi-hostpathsmdml found and phase=Bound (47.256328878s)
STEP: Creating pod pod-subpath-test-dynamicpv-vdsv
STEP: Creating a pod to test subpath
Oct 11 05:10:13.375: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vdsv" in namespace "provisioning-201" to be "Succeeded or Failed"
Oct 11 05:10:13.613: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 237.316485ms
Oct 11 05:10:15.851: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47600055s
Oct 11 05:10:18.090: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.714737256s
Oct 11 05:10:20.332: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.956626581s
Oct 11 05:10:22.570: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 9.194752707s
Oct 11 05:10:24.812: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 11.436646815s
Oct 11 05:10:27.053: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.678069302s
Oct 11 05:10:29.325: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 15.949808348s
Oct 11 05:10:31.565: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 18.189417467s
Oct 11 05:10:33.803: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 20.428072293s
Oct 11 05:10:36.041: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.665988091s
STEP: Saw pod success
Oct 11 05:10:36.041: INFO: Pod "pod-subpath-test-dynamicpv-vdsv" satisfied condition "Succeeded or Failed"
Oct 11 05:10:36.279: INFO: Trying to get logs from node ip-172-20-42-144.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-vdsv container test-container-subpath-dynamicpv-vdsv: <nil>
STEP: delete the pod
Oct 11 05:10:36.795: INFO: Waiting for pod pod-subpath-test-dynamicpv-vdsv to disappear
Oct 11 05:10:37.031: INFO: Pod pod-subpath-test-dynamicpv-vdsv no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vdsv
Oct 11 05:10:37.032: INFO: Deleting pod "pod-subpath-test-dynamicpv-vdsv" in namespace "provisioning-201"
STEP: Creating pod pod-subpath-test-dynamicpv-vdsv
STEP: Creating a pod to test subpath
Oct 11 05:10:37.507: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vdsv" in namespace "provisioning-201" to be "Succeeded or Failed"
Oct 11 05:10:37.744: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 237.226976ms
Oct 11 05:10:39.987: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479529823s
Oct 11 05:10:42.227: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.720263754s
Oct 11 05:10:44.470: INFO: Pod "pod-subpath-test-dynamicpv-vdsv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.963128793s
STEP: Saw pod success
Oct 11 05:10:44.470: INFO: Pod "pod-subpath-test-dynamicpv-vdsv" satisfied condition "Succeeded or Failed"
Oct 11 05:10:44.708: INFO: Trying to get logs from node ip-172-20-42-144.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-vdsv container test-container-subpath-dynamicpv-vdsv: <nil>
STEP: delete the pod
Oct 11 05:10:45.197: INFO: Waiting for pod pod-subpath-test-dynamicpv-vdsv to disappear
Oct 11 05:10:45.434: INFO: Pod pod-subpath-test-dynamicpv-vdsv no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vdsv
Oct 11 05:10:45.435: INFO: Deleting pod "pod-subpath-test-dynamicpv-vdsv" in namespace "provisioning-201"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Oct 11 05:11:09.647: INFO: PersistentVolumeClaim pvc-rwt2h found but phase is Pending instead of Bound.
Oct 11 05:11:11.894: INFO: PersistentVolumeClaim pvc-rwt2h found and phase=Bound (9.239249974s)
Oct 11 05:11:11.894: INFO: Waiting up to 3m0s for PersistentVolume local-48p5d to have phase Bound
Oct 11 05:11:12.140: INFO: PersistentVolume local-48p5d found and phase=Bound (245.985635ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pxcf
STEP: Creating a pod to test subpath
Oct 11 05:11:12.887: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pxcf" in namespace "provisioning-4277" to be "Succeeded or Failed"
Oct 11 05:11:13.133: INFO: Pod "pod-subpath-test-preprovisionedpv-pxcf": Phase="Pending", Reason="", readiness=false. Elapsed: 245.627616ms
Oct 11 05:11:15.379: INFO: Pod "pod-subpath-test-preprovisionedpv-pxcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491240536s
Oct 11 05:11:17.626: INFO: Pod "pod-subpath-test-preprovisionedpv-pxcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.738257459s
Oct 11 05:11:19.871: INFO: Pod "pod-subpath-test-preprovisionedpv-pxcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.983752869s
Oct 11 05:11:22.120: INFO: Pod "pod-subpath-test-preprovisionedpv-pxcf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.232380942s
Oct 11 05:11:24.365: INFO: Pod "pod-subpath-test-preprovisionedpv-pxcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.477808026s
STEP: Saw pod success
Oct 11 05:11:24.365: INFO: Pod "pod-subpath-test-preprovisionedpv-pxcf" satisfied condition "Succeeded or Failed"
Oct 11 05:11:24.610: INFO: Trying to get logs from node ip-172-20-42-144.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-pxcf container test-container-subpath-preprovisionedpv-pxcf: <nil>
STEP: delete the pod
Oct 11 05:11:25.119: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pxcf to disappear
Oct 11 05:11:25.364: INFO: Pod pod-subpath-test-preprovisionedpv-pxcf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pxcf
Oct 11 05:11:25.364: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pxcf" in namespace "provisioning-4277"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:28.691: INFO: Driver emptydir doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 242 lines ...
Oct 11 05:10:23.806: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathl2fhm] to have phase Bound
Oct 11 05:10:24.044: INFO: PersistentVolumeClaim csi-hostpathl2fhm found but phase is Pending instead of Bound.
Oct 11 05:10:26.285: INFO: PersistentVolumeClaim csi-hostpathl2fhm found but phase is Pending instead of Bound.
Oct 11 05:10:28.523: INFO: PersistentVolumeClaim csi-hostpathl2fhm found and phase=Bound (4.71711774s)
STEP: Expanding non-expandable pvc
Oct 11 05:10:29.007: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct 11 05:10:29.499: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:31.976: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:33.975: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:35.976: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:37.975: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:39.975: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:41.979: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:43.979: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:45.975: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:47.975: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:49.976: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:51.980: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:53.977: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:55.975: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:57.974: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:10:59.974: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 11 05:11:00.451: INFO: Error updating pvc csi-hostpathl2fhm: persistentvolumeclaims "csi-hostpathl2fhm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Oct 11 05:11:00.451: INFO: Deleting PersistentVolumeClaim "csi-hostpathl2fhm"
Oct 11 05:11:00.691: INFO: Waiting up to 5m0s for PersistentVolume pvc-1de039ed-c981-407d-b561-5c2c1cb158a0 to get deleted
Oct 11 05:11:00.929: INFO: PersistentVolume pvc-1de039ed-c981-407d-b561-5c2c1cb158a0 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-7443
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":21,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":5,"skipped":14,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:11:20.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
• [SLOW TEST:13.266 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":6,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:142.176 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:221
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 351 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:37.180: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:11:07.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
• [SLOW TEST:30.700 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":5,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 37 lines ...
STEP: Deleting pod hostexec-ip-172-20-33-34.ap-south-1.compute.internal-n4564 in namespace volumemode-9625
Oct 11 05:11:22.294: INFO: Deleting pod "pod-8b40bde6-48f5-4faf-9000-08c194059581" in namespace "volumemode-9625"
Oct 11 05:11:22.533: INFO: Wait up to 5m0s for pod "pod-8b40bde6-48f5-4faf-9000-08c194059581" to be fully deleted
STEP: Deleting pv and pvc
Oct 11 05:11:25.009: INFO: Deleting PersistentVolumeClaim "pvc-dws7q"
Oct 11 05:11:25.247: INFO: Deleting PersistentVolume "aws-jd9lg"
Oct 11 05:11:25.846: INFO: Couldn't delete PD "aws://ap-south-1a/vol-067ddbddf0efbf2a4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-067ddbddf0efbf2a4 is currently attached to i-0fa79d1cf19aa88af
	status code: 400, request id: f40a3899-e785-4e3e-ac2e-dd8acce58151
Oct 11 05:11:31.995: INFO: Couldn't delete PD "aws://ap-south-1a/vol-067ddbddf0efbf2a4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-067ddbddf0efbf2a4 is currently attached to i-0fa79d1cf19aa88af
	status code: 400, request id: 29104f19-e76a-418d-9072-c5dd971e40c7
Oct 11 05:11:38.157: INFO: Successfully deleted PD "aws://ap-south-1a/vol-067ddbddf0efbf2a4".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:11:38.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-9625" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:38.660: INFO: Only supported for providers [openstack] (not aws)
... skipping 41 lines ...
• [SLOW TEST:70.689 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":53,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:40.310: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 71 lines ...
• [SLOW TEST:85.680 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:11:41.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5870" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 175 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
Oct 11 05:10:40.370: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-snbhb] to have phase Bound
Oct 11 05:10:40.619: INFO: PersistentVolumeClaim pvc-snbhb found and phase=Bound (249.093195ms)
STEP: Deleting the previously created pod
Oct 11 05:11:01.846: INFO: Deleting pod "pvc-volume-tester-z4hkq" in namespace "csi-mock-volumes-4741"
Oct 11 05:11:02.086: INFO: Wait up to 5m0s for pod "pvc-volume-tester-z4hkq" to be fully deleted
STEP: Checking CSI driver logs
Oct 11 05:11:10.802: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7ab3c28e-3ee7-4782-a253-21ca422da748/volumes/kubernetes.io~csi/pvc-b96b98c3-abea-496a-a770-38ee08b33451/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-z4hkq
Oct 11 05:11:10.802: INFO: Deleting pod "pvc-volume-tester-z4hkq" in namespace "csi-mock-volumes-4741"
STEP: Deleting claim pvc-snbhb
Oct 11 05:11:11.515: INFO: Waiting up to 2m0s for PersistentVolume pvc-b96b98c3-abea-496a-a770-38ee08b33451 to get deleted
Oct 11 05:11:11.753: INFO: PersistentVolume pvc-b96b98c3-abea-496a-a770-38ee08b33451 was removed
STEP: Deleting storageclass csi-mock-volumes-4741-scns9sr
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":3,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: Destroying namespace "pod-disks-9599" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.666 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 89 lines ...
Oct 11 05:11:08.315: INFO: PersistentVolumeClaim pvc-qsqs5 found but phase is Pending instead of Bound.
Oct 11 05:11:10.561: INFO: PersistentVolumeClaim pvc-qsqs5 found and phase=Bound (11.480103034s)
Oct 11 05:11:10.561: INFO: Waiting up to 3m0s for PersistentVolume local-x6fx4 to have phase Bound
Oct 11 05:11:10.805: INFO: PersistentVolume local-x6fx4 found and phase=Bound (244.559156ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-crqt
STEP: Creating a pod to test subpath
Oct 11 05:11:11.546: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-crqt" in namespace "provisioning-1752" to be "Succeeded or Failed"
Oct 11 05:11:11.791: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 245.171356ms
Oct 11 05:11:14.037: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490740107s
Oct 11 05:11:16.282: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.735824379s
Oct 11 05:11:18.527: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.980997801s
Oct 11 05:11:20.773: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 9.227304282s
Oct 11 05:11:23.019: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.472786806s
Oct 11 05:11:25.265: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 13.71931555s
Oct 11 05:11:27.511: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 15.965417843s
Oct 11 05:11:29.758: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.211795365s
STEP: Saw pod success
Oct 11 05:11:29.758: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt" satisfied condition "Succeeded or Failed"
Oct 11 05:11:30.005: INFO: Trying to get logs from node ip-172-20-42-144.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-crqt container test-container-subpath-preprovisionedpv-crqt: <nil>
STEP: delete the pod
Oct 11 05:11:30.504: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-crqt to disappear
Oct 11 05:11:30.749: INFO: Pod pod-subpath-test-preprovisionedpv-crqt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-crqt
Oct 11 05:11:30.749: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-crqt" in namespace "provisioning-1752"
STEP: Creating pod pod-subpath-test-preprovisionedpv-crqt
STEP: Creating a pod to test subpath
Oct 11 05:11:31.249: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-crqt" in namespace "provisioning-1752" to be "Succeeded or Failed"
Oct 11 05:11:31.501: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 252.445546ms
Oct 11 05:11:33.748: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.499429669s
Oct 11 05:11:36.000: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.751073822s
Oct 11 05:11:38.246: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.996763646s
Oct 11 05:11:40.491: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.24202255s
STEP: Saw pod success
Oct 11 05:11:40.491: INFO: Pod "pod-subpath-test-preprovisionedpv-crqt" satisfied condition "Succeeded or Failed"
Oct 11 05:11:40.736: INFO: Trying to get logs from node ip-172-20-42-144.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-crqt container test-container-subpath-preprovisionedpv-crqt: <nil>
STEP: delete the pod
Oct 11 05:11:41.236: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-crqt to disappear
Oct 11 05:11:41.481: INFO: Pod pod-subpath-test-preprovisionedpv-crqt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-crqt
Oct 11 05:11:41.481: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-crqt" in namespace "provisioning-1752"
... skipping 50 lines ...
• [SLOW TEST:13.585 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:11:40.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 11 05:11:41.779: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-2e8a65d2-d4e2-472b-a92b-96a90dc6aa2d" in namespace "security-context-test-3612" to be "Succeeded or Failed"
Oct 11 05:11:42.015: INFO: Pod "busybox-readonly-false-2e8a65d2-d4e2-472b-a92b-96a90dc6aa2d": Phase="Pending", Reason="", readiness=false. Elapsed: 236.803707ms
Oct 11 05:11:44.253: INFO: Pod "busybox-readonly-false-2e8a65d2-d4e2-472b-a92b-96a90dc6aa2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474260731s
Oct 11 05:11:46.495: INFO: Pod "busybox-readonly-false-2e8a65d2-d4e2-472b-a92b-96a90dc6aa2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.716574585s
Oct 11 05:11:46.495: INFO: Pod "busybox-readonly-false-2e8a65d2-d4e2-472b-a92b-96a90dc6aa2d" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:11:46.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3612" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:47.010: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-8d7f0070-d7bc-4b65-a47b-8e12136c9dce
STEP: Creating a pod to test consume secrets
Oct 11 05:11:44.200: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d3583a27-70db-4fc3-9745-8c526674e441" in namespace "projected-5839" to be "Succeeded or Failed"
Oct 11 05:11:44.444: INFO: Pod "pod-projected-secrets-d3583a27-70db-4fc3-9745-8c526674e441": Phase="Pending", Reason="", readiness=false. Elapsed: 244.501965ms
Oct 11 05:11:46.690: INFO: Pod "pod-projected-secrets-d3583a27-70db-4fc3-9745-8c526674e441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.48995069s
STEP: Saw pod success
Oct 11 05:11:46.690: INFO: Pod "pod-projected-secrets-d3583a27-70db-4fc3-9745-8c526674e441" satisfied condition "Succeeded or Failed"
Oct 11 05:11:46.936: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-projected-secrets-d3583a27-70db-4fc3-9745-8c526674e441 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 11 05:11:47.433: INFO: Waiting for pod pod-projected-secrets-d3583a27-70db-4fc3-9745-8c526674e441 to disappear
Oct 11 05:11:47.682: INFO: Pod pod-projected-secrets-d3583a27-70db-4fc3-9745-8c526674e441 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 56 lines ...
• [SLOW TEST:12.801 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:51.052: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 91 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":2,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 107 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Oct 11 05:11:36.168: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4367 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Oct 11 05:11:38.730: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Oct 11 05:11:38.730: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4367 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Oct 11 05:11:41.266: INFO: rc: 255
Oct 11 05:11:41.266: INFO: got err error running /tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4367 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I1011 05:11:40.979097     187 merged_client_builder.go:163] Using in-cluster namespace
I1011 05:11:40.979363     187 merged_client_builder.go:121] Using in-cluster configuration
I1011 05:11:40.984740     187 merged_client_builder.go:121] Using in-cluster configuration
I1011 05:11:40.992553     187 merged_client_builder.go:121] Using in-cluster configuration
I1011 05:11:40.992822     187 round_trippers.go:432] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-4367/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F1011 05:11:40.997781     187 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc0000da000, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x308aa00, 0xc000000003, 0x0, 0x0, 0xc000789110, 0x261bfd7, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x308aa00, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0001993b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc000384d40, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x209ade0, 0xc0008ee8d0, 0x1f24400)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8a3
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000379080, 0xc00043c5a0, 0x1, 0x3)
... skipping 66 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Oct 11 05:11:41.266: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4367 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Oct 11 05:11:43.734: INFO: rc: 255
Oct 11 05:11:43.734: INFO: got err error running /tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4367 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I1011 05:11:43.532754     198 merged_client_builder.go:163] Using in-cluster namespace
I1011 05:11:43.588039     198 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 54 milliseconds
I1011 05:11:43.588102     198 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I1011 05:11:43.590869     198 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 2 milliseconds
I1011 05:11:43.590945     198 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I1011 05:11:43.590961     198 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I1011 05:11:43.594590     198 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 3 milliseconds
I1011 05:11:43.594650     198 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I1011 05:11:43.596625     198 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I1011 05:11:43.596672     198 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I1011 05:11:43.600102     198 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 3 milliseconds
I1011 05:11:43.600145     198 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I1011 05:11:43.600182     198 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: no such host
F1011 05:11:43.600207     198 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc0009f01c0, 0x88, 0x1b8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x308aa00, 0xc000000003, 0x0, 0x0, 0xc00043c310, 0x261bfd7, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x308aa00, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0000dd420, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000f53e0, 0x59, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x209a140, 0xc000350c30, 0x1f24400)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000991340, 0xc00054aab0, 0x1, 0x3)
... skipping 30 lines ...
	/usr/local/go/src/net/http/client.go:396 +0x337

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Oct 11 05:11:43.734: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4367 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Oct 11 05:11:46.132: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Oct 11 05:11:46.132: INFO: stdout: "I1011 05:11:45.963458     209 merged_client_builder.go:121] Using in-cluster configuration\nI1011 05:11:45.970944     209 merged_client_builder.go:121] Using in-cluster configuration\nI1011 05:11:45.984596     209 merged_client_builder.go:121] Using in-cluster configuration\nI1011 05:11:45.995690     209 round_trippers.go:454] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 10 milliseconds\nNo resources found in invalid namespace.\n"
Oct 11 05:11:46.132: INFO: stdout: I1011 05:11:45.963458     209 merged_client_builder.go:121] Using in-cluster configuration
... skipping 74 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should handle in-cluster config
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:636
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":2,"skipped":23,"failed":1,"failures":["[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Oct 11 05:11:38.508: INFO: PersistentVolumeClaim pvc-jqf8z found but phase is Pending instead of Bound.
Oct 11 05:11:40.743: INFO: PersistentVolumeClaim pvc-jqf8z found and phase=Bound (11.416822764s)
Oct 11 05:11:40.744: INFO: Waiting up to 3m0s for PersistentVolume local-8p4bn to have phase Bound
Oct 11 05:11:40.979: INFO: PersistentVolume local-8p4bn found and phase=Bound (235.308326ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zh4j
STEP: Creating a pod to test subpath
Oct 11 05:11:41.687: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zh4j" in namespace "provisioning-1601" to be "Succeeded or Failed"
Oct 11 05:11:41.922: INFO: Pod "pod-subpath-test-preprovisionedpv-zh4j": Phase="Pending", Reason="", readiness=false. Elapsed: 235.452656ms
Oct 11 05:11:44.158: INFO: Pod "pod-subpath-test-preprovisionedpv-zh4j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471622401s
Oct 11 05:11:46.396: INFO: Pod "pod-subpath-test-preprovisionedpv-zh4j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.708714585s
Oct 11 05:11:48.633: INFO: Pod "pod-subpath-test-preprovisionedpv-zh4j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.945683919s
STEP: Saw pod success
Oct 11 05:11:48.633: INFO: Pod "pod-subpath-test-preprovisionedpv-zh4j" satisfied condition "Succeeded or Failed"
Oct 11 05:11:48.869: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-zh4j container test-container-subpath-preprovisionedpv-zh4j: <nil>
STEP: delete the pod
Oct 11 05:11:49.351: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zh4j to disappear
Oct 11 05:11:49.596: INFO: Pod pod-subpath-test-preprovisionedpv-zh4j no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zh4j
Oct 11 05:11:49.596: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zh4j" in namespace "provisioning-1601"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:52.811: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:11:30.691: INFO: >>> kubeConfig: /root/.kube/config
... skipping 4 lines ...
Oct 11 05:11:31.924: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct 11 05:11:33.158: INFO: Successfully created a new PD: "aws://ap-south-1a/vol-0c2531246e91a898d".
Oct 11 05:11:33.158: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-mhcg
STEP: Creating a pod to test exec-volume-test
Oct 11 05:11:33.414: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-mhcg" in namespace "volume-8365" to be "Succeeded or Failed"
Oct 11 05:11:33.659: INFO: Pod "exec-volume-test-inlinevolume-mhcg": Phase="Pending", Reason="", readiness=false. Elapsed: 245.244305ms
Oct 11 05:11:35.906: INFO: Pod "exec-volume-test-inlinevolume-mhcg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492124619s
Oct 11 05:11:38.153: INFO: Pod "exec-volume-test-inlinevolume-mhcg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.739101183s
Oct 11 05:11:40.399: INFO: Pod "exec-volume-test-inlinevolume-mhcg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.984781587s
Oct 11 05:11:42.646: INFO: Pod "exec-volume-test-inlinevolume-mhcg": Phase="Pending", Reason="", readiness=false. Elapsed: 9.231634331s
Oct 11 05:11:44.892: INFO: Pod "exec-volume-test-inlinevolume-mhcg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.477594354s
STEP: Saw pod success
Oct 11 05:11:44.892: INFO: Pod "exec-volume-test-inlinevolume-mhcg" satisfied condition "Succeeded or Failed"
Oct 11 05:11:45.137: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod exec-volume-test-inlinevolume-mhcg container exec-container-inlinevolume-mhcg: <nil>
STEP: delete the pod
Oct 11 05:11:45.634: INFO: Waiting for pod exec-volume-test-inlinevolume-mhcg to disappear
Oct 11 05:11:45.879: INFO: Pod exec-volume-test-inlinevolume-mhcg no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-mhcg
Oct 11 05:11:45.879: INFO: Deleting pod "exec-volume-test-inlinevolume-mhcg" in namespace "volume-8365"
Oct 11 05:11:46.474: INFO: Couldn't delete PD "aws://ap-south-1a/vol-0c2531246e91a898d", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c2531246e91a898d is currently attached to i-02f3e01374cb7e2c6
	status code: 400, request id: ebd8f6e2-19d2-4742-895e-281e7b5db080
Oct 11 05:11:52.636: INFO: Successfully deleted PD "aws://ap-south-1a/vol-0c2531246e91a898d".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:11:52.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8365" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:53.141: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Oct 11 05:11:46.072: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-778" to be "Succeeded or Failed"
Oct 11 05:11:46.310: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 237.628477ms
Oct 11 05:11:48.549: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477079682s
Oct 11 05:11:50.788: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715474547s
Oct 11 05:11:53.028: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.955236043s
Oct 11 05:11:53.028: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:11:53.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-778" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":4,"skipped":11,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:7.379 seconds]
[sig-network] Ingress API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:11:48.436: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
Oct 11 05:11:49.664: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 11 05:11:49.664: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-p5fl
STEP: Creating a pod to test subpath
Oct 11 05:11:49.913: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-p5fl" in namespace "provisioning-3240" to be "Succeeded or Failed"
Oct 11 05:11:50.158: INFO: Pod "pod-subpath-test-inlinevolume-p5fl": Phase="Pending", Reason="", readiness=false. Elapsed: 244.937066ms
Oct 11 05:11:52.405: INFO: Pod "pod-subpath-test-inlinevolume-p5fl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491621472s
Oct 11 05:11:54.652: INFO: Pod "pod-subpath-test-inlinevolume-p5fl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.738825719s
STEP: Saw pod success
Oct 11 05:11:54.652: INFO: Pod "pod-subpath-test-inlinevolume-p5fl" satisfied condition "Succeeded or Failed"
Oct 11 05:11:54.904: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-p5fl container test-container-subpath-inlinevolume-p5fl: <nil>
STEP: delete the pod
Oct 11 05:11:55.501: INFO: Waiting for pod pod-subpath-test-inlinevolume-p5fl to disappear
Oct 11 05:11:55.747: INFO: Pod pod-subpath-test-inlinevolume-p5fl no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-p5fl
Oct 11 05:11:55.747: INFO: Deleting pod "pod-subpath-test-inlinevolume-p5fl" in namespace "provisioning-3240"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":31,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:11:52.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-b3570eab-d4dc-455b-ab74-8f891e0b9408
STEP: Creating a pod to test consume secrets
Oct 11 05:11:53.770: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-742db16a-81ae-4fb9-ad08-867e411a72e7" in namespace "projected-5307" to be "Succeeded or Failed"
Oct 11 05:11:54.039: INFO: Pod "pod-projected-secrets-742db16a-81ae-4fb9-ad08-867e411a72e7": Phase="Pending", Reason="", readiness=false. Elapsed: 267.501325ms
Oct 11 05:11:56.275: INFO: Pod "pod-projected-secrets-742db16a-81ae-4fb9-ad08-867e411a72e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.504029911s
STEP: Saw pod success
Oct 11 05:11:56.275: INFO: Pod "pod-projected-secrets-742db16a-81ae-4fb9-ad08-867e411a72e7" satisfied condition "Succeeded or Failed"
Oct 11 05:11:56.511: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-projected-secrets-742db16a-81ae-4fb9-ad08-867e411a72e7 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 11 05:11:56.989: INFO: Waiting for pod pod-projected-secrets-742db16a-81ae-4fb9-ad08-867e411a72e7 to disappear
Oct 11 05:11:57.226: INFO: Pod pod-projected-secrets-742db16a-81ae-4fb9-ad08-867e411a72e7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.578 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":24,"failed":1,"failures":["[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:11:57.717: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 224 lines ...
• [SLOW TEST:105.876 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":5,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:00.674: INFO: Only supported for providers [gce gke] (not aws)
... skipping 53 lines ...
• [SLOW TEST:37.257 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":2,"skipped":25,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:01.284: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:11:49.441: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct 11 05:11:50.634: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 11 05:11:50.872: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-b4g4
STEP: Creating a pod to test subpath
Oct 11 05:11:51.112: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-b4g4" in namespace "provisioning-6629" to be "Succeeded or Failed"
Oct 11 05:11:51.353: INFO: Pod "pod-subpath-test-inlinevolume-b4g4": Phase="Pending", Reason="", readiness=false. Elapsed: 240.829857ms
Oct 11 05:11:53.592: INFO: Pod "pod-subpath-test-inlinevolume-b4g4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479530633s
Oct 11 05:11:55.837: INFO: Pod "pod-subpath-test-inlinevolume-b4g4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.725193939s
Oct 11 05:11:58.076: INFO: Pod "pod-subpath-test-inlinevolume-b4g4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.964185066s
Oct 11 05:12:00.314: INFO: Pod "pod-subpath-test-inlinevolume-b4g4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.20209942s
STEP: Saw pod success
Oct 11 05:12:00.314: INFO: Pod "pod-subpath-test-inlinevolume-b4g4" satisfied condition "Succeeded or Failed"
Oct 11 05:12:00.552: INFO: Trying to get logs from node ip-172-20-42-144.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-b4g4 container test-container-volume-inlinevolume-b4g4: <nil>
STEP: delete the pod
Oct 11 05:12:01.035: INFO: Waiting for pod pod-subpath-test-inlinevolume-b4g4 to disappear
Oct 11 05:12:01.272: INFO: Pod pod-subpath-test-inlinevolume-b4g4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-b4g4
Oct 11 05:12:01.272: INFO: Deleting pod "pod-subpath-test-inlinevolume-b4g4" in namespace "provisioning-6629"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:11:54.441: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Oct 11 05:11:55.694: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 11 05:11:55.694: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kst4
STEP: Creating a pod to test subpath
Oct 11 05:11:55.935: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kst4" in namespace "provisioning-654" to be "Succeeded or Failed"
Oct 11 05:11:56.172: INFO: Pod "pod-subpath-test-inlinevolume-kst4": Phase="Pending", Reason="", readiness=false. Elapsed: 236.934866ms
Oct 11 05:11:58.415: INFO: Pod "pod-subpath-test-inlinevolume-kst4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480761172s
Oct 11 05:12:00.653: INFO: Pod "pod-subpath-test-inlinevolume-kst4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.718568937s
STEP: Saw pod success
Oct 11 05:12:00.653: INFO: Pod "pod-subpath-test-inlinevolume-kst4" satisfied condition "Succeeded or Failed"
Oct 11 05:12:00.890: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-kst4 container test-container-volume-inlinevolume-kst4: <nil>
STEP: delete the pod
Oct 11 05:12:01.374: INFO: Waiting for pod pod-subpath-test-inlinevolume-kst4 to disappear
Oct 11 05:12:01.611: INFO: Pod pod-subpath-test-inlinevolume-kst4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kst4
Oct 11 05:12:01.611: INFO: Deleting pod "pod-subpath-test-inlinevolume-kst4" in namespace "provisioning-654"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:02.594: INFO: Only supported for providers [azure] (not aws)
... skipping 69 lines ...
Oct 11 05:11:57.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Oct 11 05:11:59.234: INFO: Waiting up to 5m0s for pod "var-expansion-c9695358-7204-4012-9105-d43a4f9c56cb" in namespace "var-expansion-5418" to be "Succeeded or Failed"
Oct 11 05:11:59.481: INFO: Pod "var-expansion-c9695358-7204-4012-9105-d43a4f9c56cb": Phase="Pending", Reason="", readiness=false. Elapsed: 246.430096ms
Oct 11 05:12:01.730: INFO: Pod "var-expansion-c9695358-7204-4012-9105-d43a4f9c56cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.495532442s
STEP: Saw pod success
Oct 11 05:12:01.730: INFO: Pod "var-expansion-c9695358-7204-4012-9105-d43a4f9c56cb" satisfied condition "Succeeded or Failed"
Oct 11 05:12:01.976: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod var-expansion-c9695358-7204-4012-9105-d43a4f9c56cb container dapi-container: <nil>
STEP: delete the pod
Oct 11 05:12:02.474: INFO: Waiting for pod var-expansion-c9695358-7204-4012-9105-d43a4f9c56cb to disappear
Oct 11 05:12:02.720: INFO: Pod var-expansion-c9695358-7204-4012-9105-d43a4f9c56cb no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.453 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:03.223: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 122 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":6,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:12:00.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 11 05:12:02.098: INFO: Waiting up to 5m0s for pod "security-context-75c04ec0-120c-4113-80f7-fef2e94bb9a7" in namespace "security-context-4528" to be "Succeeded or Failed"
Oct 11 05:12:02.334: INFO: Pod "security-context-75c04ec0-120c-4113-80f7-fef2e94bb9a7": Phase="Pending", Reason="", readiness=false. Elapsed: 235.047276ms
Oct 11 05:12:04.570: INFO: Pod "security-context-75c04ec0-120c-4113-80f7-fef2e94bb9a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.470947954s
STEP: Saw pod success
Oct 11 05:12:04.570: INFO: Pod "security-context-75c04ec0-120c-4113-80f7-fef2e94bb9a7" satisfied condition "Succeeded or Failed"
Oct 11 05:12:04.805: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod security-context-75c04ec0-120c-4113-80f7-fef2e94bb9a7 container test-container: <nil>
STEP: delete the pod
Oct 11 05:12:05.287: INFO: Waiting for pod security-context-75c04ec0-120c-4113-80f7-fef2e94bb9a7 to disappear
Oct 11 05:12:05.522: INFO: Pod security-context-75c04ec0-120c-4113-80f7-fef2e94bb9a7 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.317 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":6,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:06.013: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 121 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":5,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:06.760: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 29 lines ...
Oct 11 05:12:03.364: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 11 05:12:03.365: INFO: stdout: "controller-manager scheduler etcd-0 etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of controller-manager
Oct 11 05:12:03.365: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2874 get componentstatuses controller-manager'
Oct 11 05:12:04.165: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 11 05:12:04.165: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of scheduler
Oct 11 05:12:04.165: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2874 get componentstatuses scheduler'
Oct 11 05:12:04.958: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 11 05:12:04.959: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of etcd-0
Oct 11 05:12:04.959: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2874 get componentstatuses etcd-0'
Oct 11 05:12:05.783: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 11 05:12:05.783: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-1
Oct 11 05:12:05.783: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2874 get componentstatuses etcd-1'
Oct 11 05:12:06.613: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 11 05:12:06.613: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:06.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2874" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl get componentstatuses
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:780
    should get componentstatuses
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:781
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":3,"skipped":42,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:07.136: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 109 lines ...
Oct 11 05:11:54.947: INFO: PersistentVolumeClaim pvc-bgcfj found but phase is Pending instead of Bound.
Oct 11 05:11:57.187: INFO: PersistentVolumeClaim pvc-bgcfj found and phase=Bound (9.200010761s)
Oct 11 05:11:57.187: INFO: Waiting up to 3m0s for PersistentVolume local-knwh5 to have phase Bound
Oct 11 05:11:57.425: INFO: PersistentVolume local-knwh5 found and phase=Bound (237.921406ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-85b6
STEP: Creating a pod to test subpath
Oct 11 05:11:58.141: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-85b6" in namespace "provisioning-5659" to be "Succeeded or Failed"
Oct 11 05:11:58.384: INFO: Pod "pod-subpath-test-preprovisionedpv-85b6": Phase="Pending", Reason="", readiness=false. Elapsed: 242.936557ms
Oct 11 05:12:00.624: INFO: Pod "pod-subpath-test-preprovisionedpv-85b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482490662s
Oct 11 05:12:02.871: INFO: Pod "pod-subpath-test-preprovisionedpv-85b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.730096838s
STEP: Saw pod success
Oct 11 05:12:02.871: INFO: Pod "pod-subpath-test-preprovisionedpv-85b6" satisfied condition "Succeeded or Failed"
Oct 11 05:12:03.109: INFO: Trying to get logs from node ip-172-20-43-95.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-85b6 container test-container-volume-preprovisionedpv-85b6: <nil>
STEP: delete the pod
Oct 11 05:12:03.591: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-85b6 to disappear
Oct 11 05:12:03.829: INFO: Pod pod-subpath-test-preprovisionedpv-85b6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-85b6
Oct 11 05:12:03.829: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-85b6" in namespace "provisioning-5659"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 17 lines ...
Oct 11 05:11:55.226: INFO: PersistentVolumeClaim pvc-fwd8q found but phase is Pending instead of Bound.
Oct 11 05:11:57.465: INFO: PersistentVolumeClaim pvc-fwd8q found and phase=Bound (6.970118995s)
Oct 11 05:11:57.465: INFO: Waiting up to 3m0s for PersistentVolume local-2hzgr to have phase Bound
Oct 11 05:11:57.702: INFO: PersistentVolume local-2hzgr found and phase=Bound (237.370976ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-b8kf
STEP: Creating a pod to test exec-volume-test
Oct 11 05:11:58.417: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-b8kf" in namespace "volume-1354" to be "Succeeded or Failed"
Oct 11 05:11:58.655: INFO: Pod "exec-volume-test-preprovisionedpv-b8kf": Phase="Pending", Reason="", readiness=false. Elapsed: 237.644298ms
Oct 11 05:12:00.894: INFO: Pod "exec-volume-test-preprovisionedpv-b8kf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476486773s
Oct 11 05:12:03.132: INFO: Pod "exec-volume-test-preprovisionedpv-b8kf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.714472939s
Oct 11 05:12:05.370: INFO: Pod "exec-volume-test-preprovisionedpv-b8kf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.952588375s
STEP: Saw pod success
Oct 11 05:12:05.370: INFO: Pod "exec-volume-test-preprovisionedpv-b8kf" satisfied condition "Succeeded or Failed"
Oct 11 05:12:05.607: INFO: Trying to get logs from node ip-172-20-42-144.ap-south-1.compute.internal pod exec-volume-test-preprovisionedpv-b8kf container exec-container-preprovisionedpv-b8kf: <nil>
STEP: delete the pod
Oct 11 05:12:06.101: INFO: Waiting for pod exec-volume-test-preprovisionedpv-b8kf to disappear
Oct 11 05:12:06.343: INFO: Pod exec-volume-test-preprovisionedpv-b8kf no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-b8kf
Oct 11 05:12:06.343: INFO: Deleting pod "exec-volume-test-preprovisionedpv-b8kf" in namespace "volume-1354"
... skipping 46 lines ...
Oct 11 05:11:53.996: INFO: PersistentVolumeClaim pvc-4gxrr found but phase is Pending instead of Bound.
Oct 11 05:11:56.239: INFO: PersistentVolumeClaim pvc-4gxrr found and phase=Bound (13.699513978s)
Oct 11 05:11:56.239: INFO: Waiting up to 3m0s for PersistentVolume local-gwkzj to have phase Bound
Oct 11 05:11:56.490: INFO: PersistentVolume local-gwkzj found and phase=Bound (251.216197ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qdk6
STEP: Creating a pod to test subpath
Oct 11 05:11:57.214: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qdk6" in namespace "provisioning-1176" to be "Succeeded or Failed"
Oct 11 05:11:57.454: INFO: Pod "pod-subpath-test-preprovisionedpv-qdk6": Phase="Pending", Reason="", readiness=false. Elapsed: 240.359557ms
Oct 11 05:11:59.697: INFO: Pod "pod-subpath-test-preprovisionedpv-qdk6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.48295863s
Oct 11 05:12:01.938: INFO: Pod "pod-subpath-test-preprovisionedpv-qdk6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.723933147s
STEP: Saw pod success
Oct 11 05:12:01.938: INFO: Pod "pod-subpath-test-preprovisionedpv-qdk6" satisfied condition "Succeeded or Failed"
Oct 11 05:12:02.179: INFO: Trying to get logs from node ip-172-20-43-95.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-qdk6 container test-container-subpath-preprovisionedpv-qdk6: <nil>
STEP: delete the pod
Oct 11 05:12:02.674: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qdk6 to disappear
Oct 11 05:12:02.920: INFO: Pod pod-subpath-test-preprovisionedpv-qdk6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qdk6
Oct 11 05:12:02.920: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qdk6" in namespace "provisioning-1176"
STEP: Creating pod pod-subpath-test-preprovisionedpv-qdk6
STEP: Creating a pod to test subpath
Oct 11 05:12:03.404: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qdk6" in namespace "provisioning-1176" to be "Succeeded or Failed"
Oct 11 05:12:03.645: INFO: Pod "pod-subpath-test-preprovisionedpv-qdk6": Phase="Pending", Reason="", readiness=false. Elapsed: 240.333837ms
Oct 11 05:12:05.888: INFO: Pod "pod-subpath-test-preprovisionedpv-qdk6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.483450543s
STEP: Saw pod success
Oct 11 05:12:05.888: INFO: Pod "pod-subpath-test-preprovisionedpv-qdk6" satisfied condition "Succeeded or Failed"
Oct 11 05:12:06.128: INFO: Trying to get logs from node ip-172-20-43-95.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-qdk6 container test-container-subpath-preprovisionedpv-qdk6: <nil>
STEP: delete the pod
Oct 11 05:12:06.617: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qdk6 to disappear
Oct 11 05:12:06.857: INFO: Pod pod-subpath-test-preprovisionedpv-qdk6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qdk6
Oct 11 05:12:06.857: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qdk6" in namespace "provisioning-1176"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:10.151: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:09.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-68" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":4,"skipped":75,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:10.410: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 21 lines ...
Oct 11 05:12:06.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 11 05:12:07.491: INFO: Waiting up to 5m0s for pod "pod-f6ec5020-d602-4187-8e6d-10a1e5272b54" in namespace "emptydir-8457" to be "Succeeded or Failed"
Oct 11 05:12:07.726: INFO: Pod "pod-f6ec5020-d602-4187-8e6d-10a1e5272b54": Phase="Pending", Reason="", readiness=false. Elapsed: 234.620577ms
Oct 11 05:12:09.961: INFO: Pod "pod-f6ec5020-d602-4187-8e6d-10a1e5272b54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.469601723s
STEP: Saw pod success
Oct 11 05:12:09.961: INFO: Pod "pod-f6ec5020-d602-4187-8e6d-10a1e5272b54" satisfied condition "Succeeded or Failed"
Oct 11 05:12:10.196: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod pod-f6ec5020-d602-4187-8e6d-10a1e5272b54 container test-container: <nil>
STEP: delete the pod
Oct 11 05:12:10.674: INFO: Waiting for pod pod-f6ec5020-d602-4187-8e6d-10a1e5272b54 to disappear
Oct 11 05:12:10.909: INFO: Pod pod-f6ec5020-d602-4187-8e6d-10a1e5272b54 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.308 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Oct 11 05:11:54.912: INFO: PersistentVolumeClaim pvc-c4thv found but phase is Pending instead of Bound.
Oct 11 05:11:57.148: INFO: PersistentVolumeClaim pvc-c4thv found and phase=Bound (13.659527849s)
Oct 11 05:11:57.148: INFO: Waiting up to 3m0s for PersistentVolume local-4hhfp to have phase Bound
Oct 11 05:11:57.382: INFO: PersistentVolume local-4hhfp found and phase=Bound (234.200817ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wjpr
STEP: Creating a pod to test subpath
Oct 11 05:11:58.088: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wjpr" in namespace "provisioning-609" to be "Succeeded or Failed"
Oct 11 05:11:58.323: INFO: Pod "pod-subpath-test-preprovisionedpv-wjpr": Phase="Pending", Reason="", readiness=false. Elapsed: 234.878936ms
Oct 11 05:12:00.559: INFO: Pod "pod-subpath-test-preprovisionedpv-wjpr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470527332s
Oct 11 05:12:02.833: INFO: Pod "pod-subpath-test-preprovisionedpv-wjpr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.744342727s
STEP: Saw pod success
Oct 11 05:12:02.833: INFO: Pod "pod-subpath-test-preprovisionedpv-wjpr" satisfied condition "Succeeded or Failed"
Oct 11 05:12:03.070: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-wjpr container test-container-subpath-preprovisionedpv-wjpr: <nil>
STEP: delete the pod
Oct 11 05:12:03.552: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wjpr to disappear
Oct 11 05:12:03.786: INFO: Pod pod-subpath-test-preprovisionedpv-wjpr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wjpr
Oct 11 05:12:03.786: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wjpr" in namespace "provisioning-609"
STEP: Creating pod pod-subpath-test-preprovisionedpv-wjpr
STEP: Creating a pod to test subpath
Oct 11 05:12:04.299: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wjpr" in namespace "provisioning-609" to be "Succeeded or Failed"
Oct 11 05:12:04.538: INFO: Pod "pod-subpath-test-preprovisionedpv-wjpr": Phase="Pending", Reason="", readiness=false. Elapsed: 238.556576ms
Oct 11 05:12:06.775: INFO: Pod "pod-subpath-test-preprovisionedpv-wjpr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.475683423s
STEP: Saw pod success
Oct 11 05:12:06.775: INFO: Pod "pod-subpath-test-preprovisionedpv-wjpr" satisfied condition "Succeeded or Failed"
Oct 11 05:12:07.009: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-wjpr container test-container-subpath-preprovisionedpv-wjpr: <nil>
STEP: delete the pod
Oct 11 05:12:07.485: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wjpr to disappear
Oct 11 05:12:07.719: INFO: Pod pod-subpath-test-preprovisionedpv-wjpr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wjpr
Oct 11 05:12:07.720: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wjpr" in namespace "provisioning-609"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":15,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:12.425: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 43 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:11:51.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:13.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4798" for this suite.


• [SLOW TEST:22.416 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":7,"skipped":51,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:13.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1366" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":8,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:13.856: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:8.356 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:15.146: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:14.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3299" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":8,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
Oct 11 05:12:11.215: INFO: The status of Pod pod-update-activedeadlineseconds-81510edc-cd9e-44b1-8187-4726a10d14cf is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 11 05:12:12.701: INFO: Successfully updated pod "pod-update-activedeadlineseconds-81510edc-cd9e-44b1-8187-4726a10d14cf"
Oct 11 05:12:12.701: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-81510edc-cd9e-44b1-8187-4726a10d14cf" in namespace "pods-1995" to be "terminated due to deadline exceeded"
Oct 11 05:12:12.946: INFO: Pod "pod-update-activedeadlineseconds-81510edc-cd9e-44b1-8187-4726a10d14cf": Phase="Running", Reason="", readiness=true. Elapsed: 245.227816ms
Oct 11 05:12:15.192: INFO: Pod "pod-update-activedeadlineseconds-81510edc-cd9e-44b1-8187-4726a10d14cf": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.491403062s
Oct 11 05:12:15.192: INFO: Pod "pod-update-activedeadlineseconds-81510edc-cd9e-44b1-8187-4726a10d14cf" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:15.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1995" for this suite.


• [SLOW TEST:12.460 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:15.706: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:17.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2578" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":7,"skipped":20,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:18.368: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":26,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:12:02.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
Oct 11 05:12:08.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 11 05:12:10.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 11 05:12:12.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769525924, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 11 05:12:15.278: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:16.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5660" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:16.418 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":6,"skipped":26,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:18.700: INFO: Only supported for providers [openstack] (not aws)
... skipping 62 lines ...
• [SLOW TEST:5.102 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":9,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:19.010: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:20.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-4021" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":7,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:20.666: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 135 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:14.118 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":6,"skipped":24,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:29.896: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 43 lines ...
Oct 11 05:12:10.167: INFO: PersistentVolumeClaim pvc-bsdgd found but phase is Pending instead of Bound.
Oct 11 05:12:12.403: INFO: PersistentVolumeClaim pvc-bsdgd found and phase=Bound (6.942359448s)
Oct 11 05:12:12.403: INFO: Waiting up to 3m0s for PersistentVolume local-4qlxs to have phase Bound
Oct 11 05:12:12.638: INFO: PersistentVolume local-4qlxs found and phase=Bound (234.709297ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pchp
STEP: Creating a pod to test subpath
Oct 11 05:12:13.346: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pchp" in namespace "provisioning-2568" to be "Succeeded or Failed"
Oct 11 05:12:13.582: INFO: Pod "pod-subpath-test-preprovisionedpv-pchp": Phase="Pending", Reason="", readiness=false. Elapsed: 235.272647ms
Oct 11 05:12:15.817: INFO: Pod "pod-subpath-test-preprovisionedpv-pchp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470330374s
Oct 11 05:12:18.052: INFO: Pod "pod-subpath-test-preprovisionedpv-pchp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.705445062s
Oct 11 05:12:20.289: INFO: Pod "pod-subpath-test-preprovisionedpv-pchp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.942627951s
STEP: Saw pod success
Oct 11 05:12:20.289: INFO: Pod "pod-subpath-test-preprovisionedpv-pchp" satisfied condition "Succeeded or Failed"
Oct 11 05:12:20.527: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-pchp container test-container-subpath-preprovisionedpv-pchp: <nil>
STEP: delete the pod
Oct 11 05:12:21.097: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pchp to disappear
Oct 11 05:12:21.334: INFO: Pod pod-subpath-test-preprovisionedpv-pchp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pchp
Oct 11 05:12:21.335: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pchp" in namespace "provisioning-2568"
STEP: Creating pod pod-subpath-test-preprovisionedpv-pchp
STEP: Creating a pod to test subpath
Oct 11 05:12:21.829: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pchp" in namespace "provisioning-2568" to be "Succeeded or Failed"
Oct 11 05:12:22.064: INFO: Pod "pod-subpath-test-preprovisionedpv-pchp": Phase="Pending", Reason="", readiness=false. Elapsed: 234.776896ms
Oct 11 05:12:24.301: INFO: Pod "pod-subpath-test-preprovisionedpv-pchp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.471593445s
STEP: Saw pod success
Oct 11 05:12:24.301: INFO: Pod "pod-subpath-test-preprovisionedpv-pchp" satisfied condition "Succeeded or Failed"
Oct 11 05:12:24.536: INFO: Trying to get logs from node ip-172-20-45-252.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-pchp container test-container-subpath-preprovisionedpv-pchp: <nil>
STEP: delete the pod
Oct 11 05:12:25.022: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pchp to disappear
Oct 11 05:12:25.258: INFO: Pod pod-subpath-test-preprovisionedpv-pchp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pchp
Oct 11 05:12:25.258: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pchp" in namespace "provisioning-2568"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":32,"failed":1,"failures":["[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:31.551: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
• [SLOW TEST:13.974 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":8,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:32.374: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 73 lines ...
• [SLOW TEST:19.891 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":6,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 60 lines ...
Oct 11 05:11:47.248: INFO: PersistentVolumeClaim csi-hostpathbtnsr found but phase is Pending instead of Bound.
Oct 11 05:11:49.494: INFO: PersistentVolumeClaim csi-hostpathbtnsr found but phase is Pending instead of Bound.
Oct 11 05:11:51.740: INFO: PersistentVolumeClaim csi-hostpathbtnsr found but phase is Pending instead of Bound.
Oct 11 05:11:53.996: INFO: PersistentVolumeClaim csi-hostpathbtnsr found and phase=Bound (20.478919337s)
STEP: Creating pod pod-subpath-test-dynamicpv-xf6t
STEP: Creating a pod to test subpath
Oct 11 05:11:54.739: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xf6t" in namespace "provisioning-3018" to be "Succeeded or Failed"
Oct 11 05:11:54.994: INFO: Pod "pod-subpath-test-dynamicpv-xf6t": Phase="Pending", Reason="", readiness=false. Elapsed: 254.815046ms
Oct 11 05:11:57.239: INFO: Pod "pod-subpath-test-dynamicpv-xf6t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500611642s
Oct 11 05:11:59.486: INFO: Pod "pod-subpath-test-dynamicpv-xf6t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.747116717s
Oct 11 05:12:01.735: INFO: Pod "pod-subpath-test-dynamicpv-xf6t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.995848573s
Oct 11 05:12:03.983: INFO: Pod "pod-subpath-test-dynamicpv-xf6t": Phase="Pending", Reason="", readiness=false. Elapsed: 9.243896799s
Oct 11 05:12:06.230: INFO: Pod "pod-subpath-test-dynamicpv-xf6t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.490777235s
STEP: Saw pod success
Oct 11 05:12:06.230: INFO: Pod "pod-subpath-test-dynamicpv-xf6t" satisfied condition "Succeeded or Failed"
Oct 11 05:12:06.477: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-xf6t container test-container-subpath-dynamicpv-xf6t: <nil>
STEP: delete the pod
Oct 11 05:12:06.998: INFO: Waiting for pod pod-subpath-test-dynamicpv-xf6t to disappear
Oct 11 05:12:07.247: INFO: Pod pod-subpath-test-dynamicpv-xf6t no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-xf6t
Oct 11 05:12:07.248: INFO: Deleting pod "pod-subpath-test-dynamicpv-xf6t" in namespace "provisioning-3018"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":77,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:8.860 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":9,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:41.275: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 139 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:436
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":10,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:42.293: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 187 lines ...
Oct 11 05:12:09.287: INFO: PersistentVolumeClaim pvc-9x8nt found but phase is Pending instead of Bound.
Oct 11 05:12:11.533: INFO: PersistentVolumeClaim pvc-9x8nt found and phase=Bound (15.97063985s)
Oct 11 05:12:11.533: INFO: Waiting up to 3m0s for PersistentVolume aws-fc8dc to have phase Bound
Oct 11 05:12:11.778: INFO: PersistentVolume aws-fc8dc found and phase=Bound (245.079657ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-pmhx
STEP: Creating a pod to test exec-volume-test
Oct 11 05:12:12.516: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-pmhx" in namespace "volume-8978" to be "Succeeded or Failed"
Oct 11 05:12:12.762: INFO: Pod "exec-volume-test-preprovisionedpv-pmhx": Phase="Pending", Reason="", readiness=false. Elapsed: 245.976537ms
Oct 11 05:12:15.009: INFO: Pod "exec-volume-test-preprovisionedpv-pmhx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492800462s
Oct 11 05:12:17.255: INFO: Pod "exec-volume-test-preprovisionedpv-pmhx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.739552611s
Oct 11 05:12:19.501: INFO: Pod "exec-volume-test-preprovisionedpv-pmhx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.985552888s
Oct 11 05:12:21.754: INFO: Pod "exec-volume-test-preprovisionedpv-pmhx": Phase="Pending", Reason="", readiness=false. Elapsed: 9.237774257s
Oct 11 05:12:24.004: INFO: Pod "exec-volume-test-preprovisionedpv-pmhx": Phase="Pending", Reason="", readiness=false. Elapsed: 11.487621605s
Oct 11 05:12:26.269: INFO: Pod "exec-volume-test-preprovisionedpv-pmhx": Phase="Pending", Reason="", readiness=false. Elapsed: 13.752984694s
Oct 11 05:12:28.518: INFO: Pod "exec-volume-test-preprovisionedpv-pmhx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.002354702s
STEP: Saw pod success
Oct 11 05:12:28.519: INFO: Pod "exec-volume-test-preprovisionedpv-pmhx" satisfied condition "Succeeded or Failed"
Oct 11 05:12:28.786: INFO: Trying to get logs from node ip-172-20-43-95.ap-south-1.compute.internal pod exec-volume-test-preprovisionedpv-pmhx container exec-container-preprovisionedpv-pmhx: <nil>
STEP: delete the pod
Oct 11 05:12:29.318: INFO: Waiting for pod exec-volume-test-preprovisionedpv-pmhx to disappear
Oct 11 05:12:29.584: INFO: Pod exec-volume-test-preprovisionedpv-pmhx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-pmhx
Oct 11 05:12:29.584: INFO: Deleting pod "exec-volume-test-preprovisionedpv-pmhx" in namespace "volume-8978"
STEP: Deleting pv and pvc
Oct 11 05:12:29.837: INFO: Deleting PersistentVolumeClaim "pvc-9x8nt"
Oct 11 05:12:30.111: INFO: Deleting PersistentVolume "aws-fc8dc"
Oct 11 05:12:30.723: INFO: Couldn't delete PD "aws://ap-south-1a/vol-08da9e0d45aca840f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08da9e0d45aca840f is currently attached to i-0b1bd823a638cd80d
	status code: 400, request id: dec17ff5-4e68-4bc7-9760-d79ea174d928
Oct 11 05:12:36.783: INFO: Couldn't delete PD "aws://ap-south-1a/vol-08da9e0d45aca840f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08da9e0d45aca840f is currently attached to i-0b1bd823a638cd80d
	status code: 400, request id: 3a8ea3c1-5f62-4121-980b-60e16d839652
Oct 11 05:12:42.896: INFO: Successfully deleted PD "aws://ap-south-1a/vol-08da9e0d45aca840f".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:42.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8978" for this suite.
... skipping 19 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:43.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":7,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:43.538: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:43.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-761" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":11,"skipped":51,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 11 05:12:44.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1091" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":10,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:46.767: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 41 lines ...
Oct 11 05:12:38.396: INFO: PersistentVolumeClaim pvc-ppcg9 found but phase is Pending instead of Bound.
Oct 11 05:12:40.637: INFO: PersistentVolumeClaim pvc-ppcg9 found and phase=Bound (13.677855333s)
Oct 11 05:12:40.637: INFO: Waiting up to 3m0s for PersistentVolume local-kb59f to have phase Bound
Oct 11 05:12:40.895: INFO: PersistentVolume local-kb59f found and phase=Bound (257.421596ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-fmhf
STEP: Creating a pod to test exec-volume-test
Oct 11 05:12:41.610: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-fmhf" in namespace "volume-2945" to be "Succeeded or Failed"
Oct 11 05:12:41.849: INFO: Pod "exec-volume-test-preprovisionedpv-fmhf": Phase="Pending", Reason="", readiness=false. Elapsed: 238.884557ms
Oct 11 05:12:44.094: INFO: Pod "exec-volume-test-preprovisionedpv-fmhf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.484311547s
STEP: Saw pod success
Oct 11 05:12:44.094: INFO: Pod "exec-volume-test-preprovisionedpv-fmhf" satisfied condition "Succeeded or Failed"
Oct 11 05:12:44.333: INFO: Trying to get logs from node ip-172-20-42-144.ap-south-1.compute.internal pod exec-volume-test-preprovisionedpv-fmhf container exec-container-preprovisionedpv-fmhf: <nil>
STEP: delete the pod
Oct 11 05:12:44.831: INFO: Waiting for pod exec-volume-test-preprovisionedpv-fmhf to disappear
Oct 11 05:12:45.070: INFO: Pod exec-volume-test-preprovisionedpv-fmhf no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-fmhf
Oct 11 05:12:45.070: INFO: Deleting pod "exec-volume-test-preprovisionedpv-fmhf" in namespace "volume-2945"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:48.126: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:983
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":4,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:48.226: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver csi-hostpath doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":30,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:12:43.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 11 05:12:44.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f85217e-7576-4251-a9b6-fa801cda843b" in namespace "downward-api-5039" to be "Succeeded or Failed"
Oct 11 05:12:45.129: INFO: Pod "downwardapi-volume-4f85217e-7576-4251-a9b6-fa801cda843b": Phase="Pending", Reason="", readiness=false. Elapsed: 245.923966ms
Oct 11 05:12:47.376: INFO: Pod "downwardapi-volume-4f85217e-7576-4251-a9b6-fa801cda843b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.492665567s
STEP: Saw pod success
Oct 11 05:12:47.376: INFO: Pod "downwardapi-volume-4f85217e-7576-4251-a9b6-fa801cda843b" satisfied condition "Succeeded or Failed"
Oct 11 05:12:47.622: INFO: Trying to get logs from node ip-172-20-43-95.ap-south-1.compute.internal pod downwardapi-volume-4f85217e-7576-4251-a9b6-fa801cda843b container client-container: <nil>
STEP: delete the pod
Oct 11 05:12:48.120: INFO: Waiting for pod downwardapi-volume-4f85217e-7576-4251-a9b6-fa801cda843b to disappear
Oct 11 05:12:48.368: INFO: Pod downwardapi-volume-4f85217e-7576-4251-a9b6-fa801cda843b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":8,"skipped":37,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 11 05:12:53.062: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":30,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 11 05:12:48.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 11 05:12:50.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba30861a-e793-46a5-8b63-62da29426030" in namespace "downward-api-354" to be "Succeeded or Failed"
Oct 11 05:12:50.629: INFO: Pod "downwardapi-volume-ba30861a-e793-46a5-8b63-62da29426030": Phase="Pending", Reason="", readiness=false. Elapsed: 249.290537ms
Oct 11 05:12:52.876: INFO: Pod "downwardapi-volume-ba30861a-e793-46a5-8b63-62da29426030": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.496335127s
STEP: Saw pod success
Oct 11 05:12:52.876: INFO: Pod "downwardapi-volume-ba30861a-e793-46a5-8b63-62da29426030" satisfied condition "Succeeded or Failed"
Oct 11 05:12:53.122: INFO: Trying to get logs from node ip-172-20-33-34.ap-south-1.compute.internal pod downwardapi-volume-ba30861a-e793-46a5-8b63-62da29426030 container client-container: <nil>
STEP: delete the pod
Oct 11 05:12:53.620: INFO: Waiting for pod downwardapi-volume-ba30861a-e793-46a5-8b63-62da29426030 to disappear
Oct 11 05:12:53.866: INFO: Pod downwardapi-volume-ba30861a-e793-46a5-8b63-62da29426030 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.459 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if cluster-info dump succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1078
STEP: running cluster-info dump
Oct 11 05:12:45.744: INFO: Running '/tmp/kubectl2452000220/kubectl --server=https://api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-671 cluster-info dump'
Oct 11 05:12:55.461: INFO: stderr: ""
Oct 11 05:12:55.464: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"10599\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-33-34.ap-south-1.compute.internal\",\n                \"uid\": \"da3dc7f0-20d6-4944-a753-df146d4141a0\",\n                \"resourceVersion\": \"10342\",\n                \"creationTimestamp\": \"2021-10-11T05:06:09Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ap-south-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ap-south-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ap-south-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-33-34.ap-south-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-33-34.ap-south-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ap-south-1\",\n                    \"topology.kubernetes.io/zone\": \"ap-south-1a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.4.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.4.0/24\"\n                ],\n 
               \"providerID\": \"aws:///ap-south-1a/i-0fa79d1cf19aa88af\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3764932Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3662532Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:06:12Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:12Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:39Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:09Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": 
\"2021-10-11T05:12:39Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:09Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:39Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:09Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:39Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:19Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.33.34\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"13.235.245.56\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-33-34.ap-south-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-33-34.ap-south-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": 
\"ec2-13-235-245-56.ap-south-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec29621daf83ff588df19e70f2e6ec97\",\n                    \"systemUUID\": \"ec29621d-af83-ff58-8df1-9e70f2e6ec97\",\n                    \"bootID\": \"dd8240ee-bc84-4886-9b45-bd98192525bb\",\n                    \"kernelVersion\": \"4.18.0-305.12.1.el8_4.x86_64\",\n                    \"osImage\": \"Red Hat Enterprise Linux 8.4 (Ootpa)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.11\",\n                    \"kubeletVersion\": \"v1.21.5\",\n                    \"kubeProxyVersion\": \"v1.21.5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 105352393\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:06e4235e95299b1d6d595c5ef4c41a9b12641f6683136c18394b858967cd1506\",\n                            \"docker.io/library/nginx:latest\"\n                        ],\n                        \"sizeBytes\": 53799606\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    
{\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 21212251\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 13995876\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 8415088\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"b3ffab73-39b9-414e-b66e-78830022c11c\",\n                \"resourceVersion\": \"2214\",\n                \"creationTimestamp\": \"2021-10-11T05:04:02Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"c5.large\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ap-south-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ap-south-1a\",\n                    \"kops.k8s.io/instancegroup\": \"master-ap-south-1a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"master\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"c5.large\",\n                    \"topology.kubernetes.io/region\": \"ap-south-1\",\n                    \"topology.kubernetes.io/zone\": \"ap-south-1a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    
\"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": \"aws:///ap-south-1a/i-0c28686e78faa9b57\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3613392Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3510992Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:04:32Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:04:32Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n          
              \"lastHeartbeatTime\": \"2021-10-11T05:09:32Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:04:02Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:09:32Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:04:02Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:09:32Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:04:02Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:09:32Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:04:25Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.34.237\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"13.233.106.39\"\n                    },\n                    {\n        
                \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-13-233-106-39.ap-south-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2bc99a493a236835b3ccee9a048468\",\n                    \"systemUUID\": \"ec2bc99a-493a-2368-35b3-ccee9a048468\",\n                    \"bootID\": \"9da28971-fa5c-40b7-aa72-9e721dbe4e0f\",\n                    \"kernelVersion\": \"4.18.0-305.12.1.el8_4.x86_64\",\n                    \"osImage\": \"Red Hat Enterprise Linux 8.4 (Ootpa)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.11\",\n                    \"kubeletVersion\": \"v1.21.5\",\n                    \"kubeProxyVersion\": \"v1.21.5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcdadm/etcd-manager@sha256:36a55fd68b835aace1533cb4310f464a7f02d884fd7d4c3f528775e39f6bdb9f\",\n                            \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20211007\"\n                        ],\n                        \"sizeBytes\": 173913880\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.21.5\"\n                     
   ],\n                        \"sizeBytes\": 127101402\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 121137987\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.2\"\n                        ],\n                        \"sizeBytes\": 116604438\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.2\"\n                        ],\n                        \"sizeBytes\": 115734039\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 105352393\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 52099384\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.2\"\n                        ],\n                        \"sizeBytes\": 27997724\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"ip-172-20-42-144.ap-south-1.compute.internal\",\n                \"uid\": \"f1373ab1-2f0f-424f-b4ea-51087af428ec\",\n                \"resourceVersion\": \"10569\",\n                \"creationTimestamp\": \"2021-10-11T05:06:01Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ap-south-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ap-south-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ap-south-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-42-144.ap-south-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-42-144.ap-south-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ap-south-1\",\n                    \"topology.kubernetes.io/zone\": \"ap-south-1a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-5169\\\":\\\"ip-172-20-42-144.ap-south-1.compute.internal\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.2.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.2.0/24\"\n                ],\n                \"providerID\": \"aws:///ap-south-1a/i-0d99681aaa2650ee3\"\n            },\n  
          \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3764940Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3662540Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:06:02Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:02Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:31Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:01Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:31Z\",\n                        \"lastTransitionTime\": 
\"2021-10-11T05:06:01Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:31Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:01Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:31Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:11Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.42.144\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"3.109.62.133\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-42-144.ap-south-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-42-144.ap-south-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-3-109-62-133.ap-south-1.compute.amazonaws.com\"\n                    }\n                ],\n             
   \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2a62a6cf6ac8f79ebd4836cfead090\",\n                    \"systemUUID\": \"ec2a62a6-cf6a-c8f7-9ebd-4836cfead090\",\n                    \"bootID\": \"66886a2b-a029-46ff-b8ac-fb3abefb6bc3\",\n                    \"kernelVersion\": \"4.18.0-305.12.1.el8_4.x86_64\",\n                    \"osImage\": \"Red Hat Enterprise Linux 8.4 (Ootpa)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.11\",\n                    \"kubeletVersion\": \"v1.21.5\",\n                    \"kubeProxyVersion\": \"v1.21.5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 105352393\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2\",\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs:1.2\"\n                        ],\n                        \"sizeBytes\": 95843946\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:06e4235e95299b1d6d595c5ef4c41a9b12641f6683136c18394b858967cd1506\",\n                            \"docker.io/library/nginx:latest\"\n                        ],\n                        \"sizeBytes\": 53799606\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 21212251\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 21205045\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 18451536\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 13995876\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 8415088\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-43-95.ap-south-1.compute.internal\",\n                \"uid\": \"581a3a3a-5ec9-40a8-983a-1bdd896dbf0e\",\n                \"resourceVersion\": \"10073\",\n                \"creationTimestamp\": \"2021-10-11T05:06:08Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ap-south-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ap-south-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ap-south-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-43-95.ap-south-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-43-95.ap-south-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ap-south-1\",\n                    
\"topology.kubernetes.io/zone\": \"ap-south-1a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.3.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.3.0/24\"\n                ],\n                \"providerID\": \"aws:///ap-south-1a/i-0b1bd823a638cd80d\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3764932Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3662532Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:06:12Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:12Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        
\"lastHeartbeatTime\": \"2021-10-11T05:12:28Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:08Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:28Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:08Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:28Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:08Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:28Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:18Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.43.95\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"3.108.218.97\"\n                    },\n                    {\n                        
\"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-43-95.ap-south-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-43-95.ap-south-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-3-108-218-97.ap-south-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2524f3102011d2b9353f14d1ea967a\",\n                    \"systemUUID\": \"ec2524f3-1020-11d2-b935-3f14d1ea967a\",\n                    \"bootID\": \"ee09656e-6331-4c1a-9190-59884a4fdf98\",\n                    \"kernelVersion\": \"4.18.0-305.12.1.el8_4.x86_64\",\n                    \"osImage\": \"Red Hat Enterprise Linux 8.4 (Ootpa)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.11\",\n                    \"kubeletVersion\": \"v1.21.5\",\n                    \"kubeProxyVersion\": \"v1.21.5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 105352393\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:06e4235e95299b1d6d595c5ef4c41a9b12641f6683136c18394b858967cd1506\",\n                            \"docker.io/library/nginx:latest\"\n                        ],\n                        
\"sizeBytes\": 53799606\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a\",\n                            \"k8s.gcr.io/e2e-test-images/nautilus:1.4\"\n                        ],\n                        \"sizeBytes\": 49230179\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 21212251\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 21205045\n                    },\n                    {\n                        
\"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 18451536\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 13995876\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 8415088\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0\",\n                            \"docker.io/library/busybox:1.27\"\n                        ],\n                        \"sizeBytes\": 720019\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-45-252.ap-south-1.compute.internal\",\n                \"uid\": \"03f1539f-a06d-4014-9b60-99693502d14c\",\n                \"resourceVersion\": \"10564\",\n                \"creationTimestamp\": \"2021-10-11T05:05:55Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ap-south-1\",\n                    
\"failure-domain.beta.kubernetes.io/zone\": \"ap-south-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ap-south-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-45-252.ap-south-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-45-252.ap-south-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ap-south-1\",\n                    \"topology.kubernetes.io/zone\": \"ap-south-1a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-volume-expand-4028\\\":\\\"ip-172-20-45-252.ap-south-1.compute.internal\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": \"aws:///ap-south-1a/i-02f3e01374cb7e2c6\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3764932Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n       
             \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3662532Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:06:02Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:02Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:45Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:05:55Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:45Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:05:55Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:45Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:05:55Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has 
sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-10-11T05:12:45Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:05Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.45.252\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"13.126.42.206\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-13-126-42-206.ap-south-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec278c3bab098bc7253eb444899cb5a6\",\n                    \"systemUUID\": \"ec278c3b-ab09-8bc7-253e-b444899cb5a6\",\n                    \"bootID\": \"011af8bb-cb54-4c54-920b-25a98223b7c9\",\n                    \"kernelVersion\": \"4.18.0-305.12.1.el8_4.x86_64\",\n                    \"osImage\": \"Red Hat Enterprise Linux 8.4 (Ootpa)\",\n          
          \"containerRuntimeVersion\": \"containerd://1.4.11\",\n                    \"kubeletVersion\": \"v1.21.5\",\n                    \"kubeProxyVersion\": \"v1.21.5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 105352393\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2\",\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs:1.2\"\n                        ],\n                        \"sizeBytes\": 95843946\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:06e4235e95299b1d6d595c5ef4c41a9b12641f6683136c18394b858967cd1506\",\n                            \"docker.io/library/nginx:latest\"\n                        ],\n                        \"sizeBytes\": 53799606\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a\",\n                            \"k8s.gcr.io/e2e-test-images/nautilus:1.4\"\n                        ],\n                       
 \"sizeBytes\": 49230179\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 21212251\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                     
   \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b\",\n                            \"k8s.gcr.io/e2e-test-images/nonroot:1.1\"\n                        ],\n                        \"sizeBytes\": 17748448\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def\",\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\"\n                        ],\n                        \"sizeBytes\": 15209393\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 13707249\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 8415088\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0\",\n                            \"docker.io/library/busybox:1.27\"\n                        ],\n                        \"sizeBytes\": 720019\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"3925\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-5cr6n.16ace18f6757e90e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bac13d78-bf90-4a64-8950-57612f2518c6\",\n                \"resourceVersion\": \"98\",\n                \"creationTimestamp\": \"2021-10-11T05:06:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n 
               \"name\": \"coredns-5dc785954d-5cr6n\",\n                \"uid\": \"bd31bf47-020c-422e-89de-db6d29bfcc8c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"750\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-5dc785954d-5cr6n to ip-172-20-45-252.ap-south-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-5cr6n.16ace18f850f3777\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5a1efb0f-0b51-4302-a0b8-81dabcd4e751\",\n                \"resourceVersion\": \"177\",\n                \"creationTimestamp\": \"2021-10-11T05:06:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-5cr6n\",\n                \"uid\": \"bd31bf47-020c-422e-89de-db6d29bfcc8c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"755\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:11Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:11Z\",\n            
\"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-5cr6n.16ace1901ca6a317\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0d1205ca-1076-4af7-85fb-889695d8016d\",\n                \"resourceVersion\": \"185\",\n                \"creationTimestamp\": \"2021-10-11T05:06:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-5cr6n\",\n                \"uid\": \"bd31bf47-020c-422e-89de-db6d29bfcc8c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"755\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\" in 2.54326888s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:13Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-5cr6n.16ace1901e6ac768\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1c94d8b1-e033-4815-b7c8-11110bba0c36\",\n                \"resourceVersion\": \"187\",\n                \"creationTimestamp\": \"2021-10-11T05:06:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": 
\"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-5cr6n\",\n                \"uid\": \"bd31bf47-020c-422e-89de-db6d29bfcc8c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"755\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:13Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-5cr6n.16ace190235fa5c9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1f988ff5-401e-4633-a962-32a37ac3dad6\",\n                \"resourceVersion\": \"189\",\n                \"creationTimestamp\": \"2021-10-11T05:06:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-5cr6n\",\n                \"uid\": \"bd31bf47-020c-422e-89de-db6d29bfcc8c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"755\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": 
\"2021-10-11T05:06:14Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-5cr6n.16ace19025fd09e2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"74534a24-cc74-44f7-b8a2-6814ff65dbee\",\n                \"resourceVersion\": \"191\",\n                \"creationTimestamp\": \"2021-10-11T05:06:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-5cr6n\",\n                \"uid\": \"bd31bf47-020c-422e-89de-db6d29bfcc8c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"755\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Readiness probe failed: HTTP probe failed with statuscode: 503\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:14Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:14Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-dv6ss.16ace176da502f7a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9943debf-e0b8-4518-9cc7-9fed93433df0\",\n                \"resourceVersion\": \"79\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n    
        },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-dv6ss\",\n                \"uid\": \"fbc87bea-78f5-48ef-b94c-d45852511a13\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"425\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:32Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-dv6ss.16ace18bd3dd6ba2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8c865165-6bf3-49cd-a4ae-3dda0131e2c3\",\n                \"resourceVersion\": \"85\",\n                \"creationTimestamp\": \"2021-10-11T05:05:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-dv6ss\",\n                \"uid\": \"fbc87bea-78f5-48ef-b94c-d45852511a13\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"443\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": 
\"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:55Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:55Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-dv6ss.16ace18e33f37d17\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dd4b7c56-cc97-4b47-b813-48364916b3ad\",\n                \"resourceVersion\": \"89\",\n                \"creationTimestamp\": \"2021-10-11T05:06:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-dv6ss\",\n                \"uid\": \"fbc87bea-78f5-48ef-b94c-d45852511a13\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"678\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-5dc785954d-dv6ss to ip-172-20-45-252.ap-south-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:05Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-dv6ss.16ace18e8d45de8c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f1976a7a-2d1e-4a6c-ac43-1968c48fc02e\",\n                \"resourceVersion\": \"169\",\n                \"creationTimestamp\": \"2021-10-11T05:06:18Z\"\n            },\n  
          \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-dv6ss\",\n                \"uid\": \"fbc87bea-78f5-48ef-b94c-d45852511a13\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"718\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:07Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:07Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-dv6ss.16ace18f3980c6f4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"736a1166-662b-4647-b1e7-b10db3602506\",\n                \"resourceVersion\": \"95\",\n                \"creationTimestamp\": \"2021-10-11T05:06:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-dv6ss\"\n            },\n            \"reason\": \"TaintManagerEviction\",\n            \"message\": \"Cancelling deletion of Pod kube-system/coredns-5dc785954d-dv6ss\",\n            \"source\": {\n                \"component\": \"taint-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-dv6ss.16ace18ff8bd71d1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"042bf221-aaf9-47fd-accd-eabd8622b1f4\",\n                \"resourceVersion\": \"179\",\n                \"creationTimestamp\": \"2021-10-11T05:06:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-dv6ss\",\n                \"uid\": \"fbc87bea-78f5-48ef-b94c-d45852511a13\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"718\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\" in 6.097951799s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:13Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-dv6ss.16ace19000c518c4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2c46380c-c9bc-4bce-a592-e8969e916202\",\n                \"resourceVersion\": \"181\",\n                \"creationTimestamp\": \"2021-10-11T05:06:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n         
       \"name\": \"coredns-5dc785954d-dv6ss\",\n                \"uid\": \"fbc87bea-78f5-48ef-b94c-d45852511a13\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"718\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:13Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-dv6ss.16ace19005e4aed5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"71afb7a8-4f81-491b-8ff0-71e247922b55\",\n                \"resourceVersion\": \"183\",\n                \"creationTimestamp\": \"2021-10-11T05:06:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-dv6ss\",\n                \"uid\": \"fbc87bea-78f5-48ef-b94c-d45852511a13\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"718\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:13Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:13Z\",\n      
      \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d.16ace176d37b3c3b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dae2ea03-25f8-4a19-8515-794c1c904679\",\n                \"resourceVersion\": \"65\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d\",\n                \"uid\": \"1a461af7-2e25-4123-be1d-1c4abb6c88f2\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"414\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-5dc785954d-dv6ss\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d.16ace18f66ba841e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0d6c570a-0722-42cc-a0c6-ef5d86a8de10\",\n                \"resourceVersion\": \"97\",\n                \"creationTimestamp\": \"2021-10-11T05:06:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d\",\n                \"uid\": 
\"1a461af7-2e25-4123-be1d-1c4abb6c88f2\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"748\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-5dc785954d-5cr6n\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb.16ace176d31a26dd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c96c8c95-132b-4668-ae8b-eff4a6f9a6ed\",\n                \"resourceVersion\": \"78\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb\",\n                \"uid\": \"3b6a78a4-6cb5-4f12-ad12-76fa725278c2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"424\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:32Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        
},\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb.16ace18bd2f04550\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"44b828ad-69aa-479e-aed7-3e549be444b6\",\n                \"resourceVersion\": \"84\",\n                \"creationTimestamp\": \"2021-10-11T05:05:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb\",\n                \"uid\": \"3b6a78a4-6cb5-4f12-ad12-76fa725278c2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"441\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:55Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:55Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb.16ace18e340eb2c0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ee61e5f2-6cda-4444-84e6-784e897d10b9\",\n                \"resourceVersion\": \"90\",\n                \"creationTimestamp\": \"2021-10-11T05:06:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb\",\n                \"uid\": 
\"3b6a78a4-6cb5-4f12-ad12-76fa725278c2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"675\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-autoscaler-84d4cfd89c-vwtvb to ip-172-20-45-252.ap-south-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:05Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb.16ace18e8cc01781\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"568e4e0d-8f21-4f6c-832b-79d7c69e87a7\",\n                \"resourceVersion\": \"167\",\n                \"creationTimestamp\": \"2021-10-11T05:06:18Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb\",\n                \"uid\": \"3b6a78a4-6cb5-4f12-ad12-76fa725278c2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"717\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:07Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:07Z\",\n            \"count\": 1,\n            
\"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb.16ace18f3980a7a0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e3173b8d-0dc0-47b5-9c25-91ebb4dbbced\",\n                \"resourceVersion\": \"94\",\n                \"creationTimestamp\": \"2021-10-11T05:06:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb\"\n            },\n            \"reason\": \"TaintManagerEviction\",\n            \"message\": \"Cancelling deletion of Pod kube-system/coredns-autoscaler-84d4cfd89c-vwtvb\",\n            \"source\": {\n                \"component\": \"taint-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb.16ace18f4956cb10\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"30cbb459-e3e1-48ce-b7a4-83e635e9e180\",\n                \"resourceVersion\": \"171\",\n                \"creationTimestamp\": \"2021-10-11T05:06:18Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb\",\n                \"uid\": \"3b6a78a4-6cb5-4f12-ad12-76fa725278c2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": 
\"717\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\" in 3.163979605s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb.16ace18f514aa06b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4f0dfbc6-2e29-4c6e-a6a5-f9b2485bbc6e\",\n                \"resourceVersion\": \"173\",\n                \"creationTimestamp\": \"2021-10-11T05:06:18Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb\",\n                \"uid\": \"3b6a78a4-6cb5-4f12-ad12-76fa725278c2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"717\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": 
null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb.16ace18f5699ab31\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c4ab36f0-2a10-4e45-9e19-f87e3b2273d2\",\n                \"resourceVersion\": \"175\",\n                \"creationTimestamp\": \"2021-10-11T05:06:18Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb\",\n                \"uid\": \"3b6a78a4-6cb5-4f12-ad12-76fa725278c2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"717\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c.16ace176d375a850\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f08ae5f6-b946-45ab-ac43-afd9149d0645\",\n                \"resourceVersion\": \"63\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"coredns-autoscaler-84d4cfd89c\",\n                \"uid\": \"bf1f7686-4509-4654-aa7f-e4a8660f99f0\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"416\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-autoscaler-84d4cfd89c-vwtvb\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler.16ace176cf5181a9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"76a15c75-cb4f-4301-8b31-566808a9438a\",\n                \"resourceVersion\": \"64\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler\",\n                \"uid\": \"dc394fa1-dfe8-4c12-ae15-8e83a7cae7fc\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"238\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-autoscaler-84d4cfd89c to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        
},\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16ace176ce18eb8f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cb6c496b-c411-485a-bafd-77f49a49bf72\",\n                \"resourceVersion\": \"59\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"558e6af6-3040-4e17-964b-47544dcc96d3\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"231\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16ace18f6625c169\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"63bce86b-858d-4646-b340-f764d4a3b579\",\n                \"resourceVersion\": \"96\",\n                \"creationTimestamp\": \"2021-10-11T05:06:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"558e6af6-3040-4e17-964b-47544dcc96d3\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"747\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica 
set coredns-5dc785954d to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"lastTimestamp\": \"2021-10-11T05:06:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-848dc45d58-zmml2.16ace176db5adfba\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e74f314f-6d3a-47b7-8687-02790edb48b4\",\n                \"resourceVersion\": \"77\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-848dc45d58-zmml2\",\n                \"uid\": \"ef4fecb7-bc84-411e-a230-d16436c939c5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"426\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/network-unavailable: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:29Z\",\n            \"count\": 3,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-848dc45d58-zmml2.16ace178c4b69cf8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"66ef932b-286f-4792-a1b1-1bc89fced05a\",\n                \"resourceVersion\": \"80\",\n                \"creationTimestamp\": \"2021-10-11T05:04:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-848dc45d58-zmml2\",\n                \"uid\": \"ef4fecb7-bc84-411e-a230-d16436c939c5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/dns-controller-848dc45d58-zmml2 to ip-172-20-34-237.ap-south-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:33Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-848dc45d58-zmml2.16ace178dfe848e4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"118d3c6c-dff8-43c3-85d3-b09d4612b218\",\n                \"resourceVersion\": \"81\",\n                \"creationTimestamp\": \"2021-10-11T05:04:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-848dc45d58-zmml2\",\n                \"uid\": \"ef4fecb7-bc84-411e-a230-d16436c939c5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"479\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image 
\\\"k8s.gcr.io/kops/dns-controller:1.22.0-beta.2\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:34Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-848dc45d58-zmml2.16ace178e146b54a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6eba733f-73de-45cd-bfc9-62614c9397e5\",\n                \"resourceVersion\": \"82\",\n                \"creationTimestamp\": \"2021-10-11T05:04:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-848dc45d58-zmml2\",\n                \"uid\": \"ef4fecb7-bc84-411e-a230-d16436c939c5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"479\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:34Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"dns-controller-848dc45d58-zmml2.16ace178e617f23f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"20c7725c-b639-4677-b7ad-19acf37c6d3e\",\n                \"resourceVersion\": \"83\",\n                \"creationTimestamp\": \"2021-10-11T05:04:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-848dc45d58-zmml2\",\n                \"uid\": \"ef4fecb7-bc84-411e-a230-d16436c939c5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"479\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:34Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-848dc45d58.16ace176d35ebf5a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"10f3710e-b438-47a5-97c6-85bc0a528c7e\",\n                \"resourceVersion\": \"62\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-848dc45d58\",\n                \"uid\": \"50d2ee36-f0be-4931-b15f-6f4217d8dca8\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": 
\"415\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: dns-controller-848dc45d58-zmml2\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller.16ace176cf4e5a22\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f80a07ee-6a71-4c41-abeb-57b0c9ba9008\",\n                \"resourceVersion\": \"60\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller\",\n                \"uid\": \"2ed22d2c-5791-44c8-802e-f4803f90f613\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"246\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set dns-controller-848dc45d58 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-34-237.ap-south-1.compute.internal.16ace16703aa0d35\",\n                \"namespace\": 
\"kube-system\",\n                \"uid\": \"d2821321-8279-484e-8b3e-31811756c963\",\n                \"resourceVersion\": \"20\",\n                \"creationTimestamp\": \"2021-10-11T05:04:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"f2fdba3a9d9fa2d2cc646c701c47a12f\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20211007\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:17Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-34-237.ap-south-1.compute.internal.16ace169c6e3939a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e3a0e5be-7c47-4b5d-b9b4-4107d0214f7c\",\n                \"resourceVersion\": \"35\",\n                \"creationTimestamp\": \"2021-10-11T05:04:04Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"f2fdba3a9d9fa2d2cc646c701c47a12f\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": 
\"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20211007\\\" in 11.865249317s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:29Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-34-237.ap-south-1.compute.internal.16ace16a1be86678\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"168fd347-9916-4908-ae09-dbb4183fa1af\",\n                \"resourceVersion\": \"38\",\n                \"creationTimestamp\": \"2021-10-11T05:04:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"f2fdba3a9d9fa2d2cc646c701c47a12f\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:30Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-34-237.ap-south-1.compute.internal.16ace16a219ecbf5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"50008a3b-0b1d-498a-8f92-eb47b937168b\",\n                \"resourceVersion\": \"39\",\n                \"creationTimestamp\": \"2021-10-11T05:04:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"f2fdba3a9d9fa2d2cc646c701c47a12f\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:30Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-34-237.ap-south-1.compute.internal.16ace1670bc198ad\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5f6535be-a3d7-46c4-aee8-b04ebc9dde5a\",\n                \"resourceVersion\": \"22\",\n                \"creationTimestamp\": \"2021-10-11T05:04:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                
\"name\": \"etcd-manager-main-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"595c00043cbaf5f2df12d8a648379a64\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20211007\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:17Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-34-237.ap-south-1.compute.internal.16ace16a0864064b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d05e3588-1aab-4614-b0f6-bcde5e105cc2\",\n                \"resourceVersion\": \"36\",\n                \"creationTimestamp\": \"2021-10-11T05:04:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"595c00043cbaf5f2df12d8a648379a64\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20211007\\\" in 12.828422202s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            
},\n            \"firstTimestamp\": \"2021-10-11T05:03:30Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-34-237.ap-south-1.compute.internal.16ace16a1bd79f34\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"796dc2a4-a8ec-4052-aeec-6b397ddf32bb\",\n                \"resourceVersion\": \"37\",\n                \"creationTimestamp\": \"2021-10-11T05:04:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"595c00043cbaf5f2df12d8a648379a64\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:30Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-34-237.ap-south-1.compute.internal.16ace16a22795a61\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d01c706c-16dd-4ddc-aa18-c02ffa0ef544\",\n                \"resourceVersion\": \"40\",\n 
               \"creationTimestamp\": \"2021-10-11T05:04:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"595c00043cbaf5f2df12d8a648379a64\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:30Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-8gqh5.16ace176cd6c61bd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cbb4b3c8-e673-49c7-8b57-22ddaf042b2c\",\n                \"resourceVersion\": \"58\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-8gqh5\",\n                \"uid\": \"a1b833c4-83cc-4207-b17b-3d04df7a91ea\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"411\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kops-controller-8gqh5 to ip-172-20-34-237.ap-south-1.compute.internal\",\n            \"source\": {\n                \"component\": 
\"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-8gqh5.16ace17722cc1538\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5bf49bc4-562c-432d-9870-d48a3f3bdcd2\",\n                \"resourceVersion\": \"68\",\n                \"creationTimestamp\": \"2021-10-11T05:04:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-8gqh5\",\n                \"uid\": \"a1b833c4-83cc-4207-b17b-3d04df7a91ea\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"413\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kops-controller:1.22.0-beta.2\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:26Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-8gqh5.16ace17725250f4e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"75fb3d93-bfb9-4328-95ac-5f3ec375816a\",\n                
\"resourceVersion\": \"72\",\n                \"creationTimestamp\": \"2021-10-11T05:04:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-8gqh5\",\n                \"uid\": \"a1b833c4-83cc-4207-b17b-3d04df7a91ea\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"413\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:26Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-8gqh5.16ace1772a144249\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b39b3577-fc5a-401f-aecf-075c129abe44\",\n                \"resourceVersion\": \"73\",\n                \"creationTimestamp\": \"2021-10-11T05:04:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-8gqh5\",\n                \"uid\": \"a1b833c4-83cc-4207-b17b-3d04df7a91ea\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"413\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kops-controller\",\n            \"source\": 
{\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:26Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-leader.16ace1774f4c9167\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"27686be0-48a9-49bd-b6fe-76514c6ee902\",\n                \"resourceVersion\": \"74\",\n                \"creationTimestamp\": \"2021-10-11T05:04:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ConfigMap\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-leader\",\n                \"uid\": \"0ac2511a-1690-4bdd-947a-d581f21c345a\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"457\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-34-237.ap-south-1.compute.internal_d4115244-7d43-4831-98f8-be50941c9571 became leader\",\n            \"source\": {\n                \"component\": \"ip-172-20-34-237.ap-south-1.compute.internal_d4115244-7d43-4831-98f8-be50941c9571\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:27Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller.16ace176cb8eeb86\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"c696e87e-a8ee-4e7b-864b-f9abd1080f5e\",\n                \"resourceVersion\": \"57\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller\",\n                \"uid\": \"141f6820-8975-46c0-b87d-7473ca57b2c9\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"218\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kops-controller-8gqh5\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal.16ace166e32a9148\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b5bd6988-7b8d-49d3-8c29-f6fa5c4daa76\",\n                \"resourceVersion\": \"44\",\n                \"creationTimestamp\": \"2021-10-11T05:04:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"951972ef0f3380ccefa22b67d90d7acd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.5\\\" already present on machine\",\n     
       \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:16Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:43Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal.16ace167fa7efe6f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0410ce2d-e302-480c-b9e6-bbbcad9889b3\",\n                \"resourceVersion\": \"45\",\n                \"creationTimestamp\": \"2021-10-11T05:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"951972ef0f3380ccefa22b67d90d7acd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:43Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal.16ace1680ab03877\",\n              
  \"namespace\": \"kube-system\",\n                \"uid\": \"863e35ce-83be-4d62-84db-b9f3c7c7959c\",\n                \"resourceVersion\": \"46\",\n                \"creationTimestamp\": \"2021-10-11T05:04:04Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"951972ef0f3380ccefa22b67d90d7acd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:43Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal.16ace1680add0941\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3a44e955-853e-42ca-b669-f871a454f267\",\n                \"resourceVersion\": \"32\",\n                \"creationTimestamp\": \"2021-10-11T05:04:04Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"951972ef0f3380ccefa22b67d90d7acd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n   
         \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.2\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal.16ace1683eab8e7f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"02efae54-ad2c-4048-8759-aaba06128a94\",\n                \"resourceVersion\": \"33\",\n                \"creationTimestamp\": \"2021-10-11T05:04:04Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"951972ef0f3380ccefa22b67d90d7acd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:22Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n   
     },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal.16ace16863daa888\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2061d950-5466-405f-adaf-7b16bce2b305\",\n                \"resourceVersion\": \"34\",\n                \"creationTimestamp\": \"2021-10-11T05:04:04Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"951972ef0f3380ccefa22b67d90d7acd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:23Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-34-237.ap-south-1.compute.internal.16ace1670bc2149f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ae95f577-e3fc-4372-9746-b41f93161651\",\n                \"resourceVersion\": \"49\",\n                \"creationTimestamp\": \"2021-10-11T05:04:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-34-237.ap-south-1.compute.internal\",\n     
           \"uid\": \"92d8f0527d5807f3c12dacad7d0dce41\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:17Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:09Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-34-237.ap-south-1.compute.internal.16ace167f902e38c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b8d82125-2408-40c3-9c33-efb0682b1ce0\",\n                \"resourceVersion\": \"50\",\n                \"creationTimestamp\": \"2021-10-11T05:04:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"92d8f0527d5807f3c12dacad7d0dce41\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:21Z\",\n           
 \"lastTimestamp\": \"2021-10-11T05:04:09Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-34-237.ap-south-1.compute.internal.16ace16806fe9f3e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f77abaeb-d3ae-4e76-9691-bd71dd13affe\",\n                \"resourceVersion\": \"51\",\n                \"creationTimestamp\": \"2021-10-11T05:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"92d8f0527d5807f3c12dacad7d0dce41\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:09Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-34-237.ap-south-1.compute.internal.16ace16fa5e86836\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"95defc7e-b035-4de9-968e-dc0cf9b86bb2\",\n                \"resourceVersion\": \"48\",\n                
\"creationTimestamp\": \"2021-10-11T05:04:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"92d8f0527d5807f3c12dacad7d0dce41\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"BackOff\",\n            \"message\": \"Back-off restarting failed container\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:54Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:55Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.16ace173496f9399\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4b7c492c-2f56-4293-a210-e61400161148\",\n                \"resourceVersion\": \"52\",\n                \"creationTimestamp\": \"2021-10-11T05:04:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"01f4ee0a-2de3-4052-af11-f5a25d2dd1d6\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"263\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-34-237.ap-south-1.compute.internal_624fe808-9597-4d3a-9b6a-c3d784dc0300 became leader\",\n            \"source\": {\n           
     \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:10Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns.16ace176c8372346\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4fe65f64-ec02-4144-b2ac-f436ddf84617\",\n                \"resourceVersion\": \"56\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"PodDisruptionBudget\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns\",\n                \"uid\": \"5a6cd22a-ff89-4669-b3b1-d8151584a782\",\n                \"apiVersion\": \"policy/v1\",\n                \"resourceVersion\": \"234\"\n            },\n            \"reason\": \"NoPods\",\n            \"message\": \"No matching pods found\",\n            \"source\": {\n                \"component\": \"controllermanager\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:25Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-33-34.ap-south-1.compute.internal.16ace181a23a3437\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a97b3e5a-f1b6-457e-8284-d7dcac294c8d\",\n                \"resourceVersion\": \"213\",\n                \"creationTimestamp\": \"2021-10-11T05:06:28Z\"\n            },\n            \"involvedObject\": {\n        
        \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-33-34.ap-south-1.compute.internal\",\n                \"uid\": \"c61d2f87510c22e6fd486b7c3b36ea71\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-34.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-33-34.ap-south-1.compute.internal.16ace181a53534fd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"16f11565-65d1-44d4-8c35-bb986496398b\",\n                \"resourceVersion\": \"214\",\n                \"creationTimestamp\": \"2021-10-11T05:06:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-33-34.ap-south-1.compute.internal\",\n                \"uid\": \"c61d2f87510c22e6fd486b7c3b36ea71\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-33-34.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-33-34.ap-south-1.compute.internal.16ace181aafbe395\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"98c622c3-4a88-43ed-9963-dd4fda3a0153\",\n                \"resourceVersion\": \"215\",\n                \"creationTimestamp\": \"2021-10-11T05:06:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-33-34.ap-south-1.compute.internal\",\n                \"uid\": \"c61d2f87510c22e6fd486b7c3b36ea71\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-34.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-34-237.ap-south-1.compute.internal.16ace166e33063e0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f038bb2d-6d32-415a-8df7-371d6b3b646d\",\n             
   \"resourceVersion\": \"19\",\n                \"creationTimestamp\": \"2021-10-11T05:04:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"bf6025a5cd1ddb4585f52d00767b7dbc\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:16Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-34-237.ap-south-1.compute.internal.16ace167fa4f8e39\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d0b2a388-871f-4e1f-820b-554caa9e9784\",\n                \"resourceVersion\": \"25\",\n                \"creationTimestamp\": \"2021-10-11T05:04:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"bf6025a5cd1ddb4585f52d00767b7dbc\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container 
kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-34-237.ap-south-1.compute.internal.16ace16809f0c4fc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"396614c5-c51e-4120-a920-465e5997fd23\",\n                \"resourceVersion\": \"30\",\n                \"creationTimestamp\": \"2021-10-11T05:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"bf6025a5cd1ddb4585f52d00767b7dbc\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-42-144.ap-south-1.compute.internal.16ace17fbd226664\",\n              
  \"namespace\": \"kube-system\",\n                \"uid\": \"3510c1a2-3129-41f1-b056-7c084460fe58\",\n                \"resourceVersion\": \"129\",\n                \"creationTimestamp\": \"2021-10-11T05:06:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-42-144.ap-south-1.compute.internal\",\n                \"uid\": \"2c78924c96dd03df94010fca1114716a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-42-144.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:03Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-42-144.ap-south-1.compute.internal.16ace17fc0c0070d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"014ac8f5-8176-4ea0-b816-852e8c112f07\",\n                \"resourceVersion\": \"132\",\n                \"creationTimestamp\": \"2021-10-11T05:06:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-42-144.ap-south-1.compute.internal\",\n                \"uid\": \"2c78924c96dd03df94010fca1114716a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": 
\"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-42-144.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:03Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-42-144.ap-south-1.compute.internal.16ace17fcb8ac55a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6cda0148-afa5-485e-bfb1-ef639f70815d\",\n                \"resourceVersion\": \"134\",\n                \"creationTimestamp\": \"2021-10-11T05:06:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-42-144.ap-south-1.compute.internal\",\n                \"uid\": \"2c78924c96dd03df94010fca1114716a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-42-144.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:03Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-43-95.ap-south-1.compute.internal.16ace181846e9e3b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"16d5375a-73db-4c61-a224-506010b3d3fd\",\n                \"resourceVersion\": \"178\",\n                \"creationTimestamp\": \"2021-10-11T05:06:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-43-95.ap-south-1.compute.internal\",\n                \"uid\": \"2644900af7680b3908d1740b43c7c036\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-43-95.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-43-95.ap-south-1.compute.internal.16ace1818684d990\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"eef272db-46f1-46fb-8a1d-988c81c132c2\",\n                \"resourceVersion\": \"180\",\n                \"creationTimestamp\": \"2021-10-11T05:06:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-43-95.ap-south-1.compute.internal\",\n                
\"uid\": \"2644900af7680b3908d1740b43c7c036\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-43-95.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-43-95.ap-south-1.compute.internal.16ace1818b395529\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1c8f9b34-1ec1-41e8-8288-3d29b40e5e33\",\n                \"resourceVersion\": \"182\",\n                \"creationTimestamp\": \"2021-10-11T05:06:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-43-95.ap-south-1.compute.internal\",\n                \"uid\": \"2644900af7680b3908d1740b43c7c036\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-43-95.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"lastTimestamp\": \"2021-10-11T05:05:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n      
      \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-45-252.ap-south-1.compute.internal.16ace17e6f45a69b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"898b617c-bc85-4ffd-84a2-4a06c8408044\",\n                \"resourceVersion\": \"130\",\n                \"creationTimestamp\": \"2021-10-11T05:06:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-45-252.ap-south-1.compute.internal\",\n                \"uid\": \"463e8182fa542aac3b016d172aa62aca\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:57Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-45-252.ap-south-1.compute.internal.16ace17e719edcb6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e9290494-7ec0-4344-8f80-cf842afd39c9\",\n                \"resourceVersion\": \"133\",\n                \"creationTimestamp\": \"2021-10-11T05:06:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": 
\"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-45-252.ap-south-1.compute.internal\",\n                \"uid\": \"463e8182fa542aac3b016d172aa62aca\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:58Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-45-252.ap-south-1.compute.internal.16ace17e76081d8a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9474cebc-08c5-4d31-9965-5b6c5bd0c47b\",\n                \"resourceVersion\": \"135\",\n                \"creationTimestamp\": \"2021-10-11T05:06:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-45-252.ap-south-1.compute.internal\",\n                \"uid\": \"463e8182fa542aac3b016d172aa62aca\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-252.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:58Z\",\n            \"lastTimestamp\": 
\"2021-10-11T05:04:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-34-237.ap-south-1.compute.internal.16ace16703b322ae\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5d1869ea-1c58-4c8e-8aee-38cca0c0bcd4\",\n                \"resourceVersion\": \"21\",\n                \"creationTimestamp\": \"2021-10-11T05:04:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"a71e75371d6b74e1efc358d2cc996735\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-scheduler-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:17Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-34-237.ap-south-1.compute.internal.16ace167fa8056d6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5815ae2f-e9fb-4cef-8f0d-be3972de9cdd\",\n                \"resourceVersion\": \"27\",\n                \"creationTimestamp\": 
\"2021-10-11T05:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"a71e75371d6b74e1efc358d2cc996735\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-34-237.ap-south-1.compute.internal.16ace16807e27c13\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4fcc2b7a-0db3-4696-8fa3-bc2dedf973cf\",\n                \"resourceVersion\": \"29\",\n                \"creationTimestamp\": \"2021-10-11T05:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"uid\": \"a71e75371d6b74e1efc358d2cc996735\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n              
  \"host\": \"ip-172-20-34-237.ap-south-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"lastTimestamp\": \"2021-10-11T05:03:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.16ace171145035d4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"355930b1-75c2-4ac3-a160-7c54afde9755\",\n                \"resourceVersion\": \"13\",\n                \"creationTimestamp\": \"2021-10-11T05:04:00Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"1a87be59-8b5a-4f63-8bb9-6eb3dca15437\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"215\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-34-237.ap-south-1.compute.internal_a9b03bdd-7014-4543-9f18-58702dedfb44 became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-10-11T05:04:00Z\",\n            \"lastTimestamp\": \"2021-10-11T05:04:00Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"10661\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"10665\"\n    },\n    \"items\": 
[\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"79743a53-a533-459e-b46f-3bf247f28cf9\",\n                \"resourceVersion\": \"233\",\n                \"creationTimestamp\": \"2021-10-11T05:04:01Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/port\\\":\\\"9153\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"kube-dns\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"resourceVersion\\\":\\\"0\\\"},\\\"spec\\\":{\\\"clusterIP\\\":\\\"100.64.0.10\\\",\\\"ports\\\":[{\\\"name\\\":\\\"dns\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}],\\\"selector\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}}}\\n\",\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n   
             \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"100.64.0.10\",\n                \"clusterIPs\": [\n                    \"100.64.0.10\"\n                ],\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\",\n                \"ipFamilies\": [\n                    \"IPv4\"\n                ],\n                \"ipFamilyPolicy\": \"SingleStack\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"10674\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"141f6820-8975-46c0-b87d-7473ca57b2c9\",\n                \"resourceVersion\": \"461\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-10-11T05:04:01Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"kops-controller.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": 
\"kops\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"version\": \"v1.22.0-beta.2\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.2\\\"},\\\"name\\\":\\\"kops-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kops-controller\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"dns.alpha.kubernetes.io/internal\\\":\\\"kops-controller.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.2\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/kops-controller\\\",\\\"--v=2\\\",\\\"--conf=/etc/kubernetes/kops-controller/config/config.yaml\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/kops-controller:1.22.0-beta.2\\\",\\\"name\\\":\\\"kops-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/config/\\\",\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/pki/\\\"
,\\\"name\\\":\\\"kops-controller-pki\\\"}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kops.k8s.io/kops-controller-pki\\\":\\\"\\\",\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccount\\\":\\\"kops-controller\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"node.cloudprovider.kubernetes.io/uninitialized\\\",\\\"operator\\\":\\\"Exists\\\"},{\\\"key\\\":\\\"node.kubernetes.io/not-ready\\\",\\\"operator\\\":\\\"Exists\\\"},{\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"kops-controller\\\"},\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes/kops-controller/\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"kops-controller-pki\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kops-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                            \"k8s-app\": \"kops-controller\",\n                            \"version\": \"v1.22.0-beta.2\"\n                        },\n                        \"annotations\": {\n                            \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n          
                      \"configMap\": {\n                                    \"name\": \"kops-controller\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/kubernetes/kops-controller/\",\n                                    \"type\": \"Directory\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kops-controller\",\n                                \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.2\",\n                                \"command\": [\n                                    \"/kops-controller\",\n                                    \"--v=2\",\n                                    \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kops-controller-config\",\n                                        \"mountPath\": 
\"/etc/kubernetes/kops-controller/config/\"\n                                    },\n                                    {\n                                        \"name\": \"kops-controller-pki\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kops.k8s.io/kops-controller-pki\": \"\",\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"kops-controller\",\n                        \"serviceAccount\": \"kops-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"node.cloudprovider.kubernetes.io/uninitialized\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node.kubernetes.io/not-ready\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n          
                      \"key\": \"node-role.kubernetes.io/master\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 1,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 1,\n                \"numberReady\": 1,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 1,\n                \"numberAvailable\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"10689\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"558e6af6-3040-4e17-964b-47544dcc96d3\",\n                \"resourceVersion\": \"815\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-10-11T05:04:01Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"coredns\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":\\\"10%\\\",\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"podAntiAffinity\\\":{\\\"preferredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"podAffinityTerm\\\":{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kube-dns\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"weight\\\":100}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"-conf\\\",\\\"/etc/coredns/Corefile\\\"],\\\"image\\\":\\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"failureThreshold\\\":5,\\\"httpGet\\\":{\\\"path\\\":\\\"/health\\\",\\\"port\\\":8080,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"coredns\\\",\\\"ports\\\":[{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns\\\",\\\"protocol\\\":\\\"UDP\\\"},{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns-tcp\\\",\\\"protocol\\\":\\\"TCP\\\"},{\\\"containerPort\\\":9153,\\\"name\\\":\\\"metrics\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"readinessProbe\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/ready\\\",\\\"port\\\":8181,\\\"scheme\\\":\\\"HTTP\\\"}}
,\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"170Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"securityContext\\\":{\\\"allowPrivilegeEscalation\\\":false,\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_BIND_SERVICE\\\"],\\\"drop\\\":[\\\"all\\\"]},\\\"readOnlyRootFilesystem\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"Corefile\\\",\\\"path\\\":\\\"Corefile\\\"}],\\\"name\\\":\\\"coredns\\\"},\\\"name\\\":\\\"config-volume\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                              
          }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                
\"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                  
      \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n               
                                 ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"10%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 2,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-10-11T05:06:14Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:14Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n          
              \"lastUpdateTime\": \"2021-10-11T05:06:20Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:04:25Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-5dc785954d\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dc394fa1-dfe8-4c12-ae15-8e83a7cae7fc\",\n                \"resourceVersion\": \"762\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-10-11T05:04:02Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"kubernetes.io/cluster-service\": \"true\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"coredns-autoscaler\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"},\\\"name\\\":\\\"coredns-autoscaler\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/cluster-proportional-autoscaler\\\",\\\"--namespace=kube-system\\\",\\\"--configmap=coredns-autoscaler\\\",\\\"--target=Deployment/coredns\\\",\\\"--default-params={\\\\\\\"linear\\\\\\\":{\\\\\\\"coresPerReplica\\\\\\\":256,\\\\\\\"nodesPerReplica\\\\\\\":16,\\\\\\\"preventSinglePointFailure\\\\\\\":true}}\\\",\\\"--logtostderr=true\\\",\\\"--v=2\\\"],\\\"image\\\":\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\",\\\"name\\\":\\\"autoscaler\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"10Mi\\\"}}}],\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns-autoscaler\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": 
{\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n         
               \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": \"25%\",\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-10-11T05:06:11Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:06:11Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        
\"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-10-11T05:06:11Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:04:25Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-autoscaler-84d4cfd89c\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2ed22d2c-5791-44c8-802e-f4803f90f613\",\n                \"resourceVersion\": \"484\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-10-11T05:04:03Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"dns-controller.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"version\": \"v1.22.0-beta.2\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.2\\\"},\\\"name\\\":\\\"dns-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"dns-controller\\\"}},\\\"strategy\\\":{\\\"type\\\":\\\"Recreate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.2\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/dns-controller\\\",\\\"--watch-ingress=false\\\",\\\"--dns=aws-route53\\\",\\\"--zone=*/ZEMLNXIIWQ0RV\\\",\\\"--zone=*/*\\\",\\\"-v=2\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/dns-controller:1.22.0-beta.2\\\",\\\"name\\\":\\\"dns-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true}}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccount\\\":\\\"dns-controller\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"node.cloudprovider.kubernetes.io/uninitialized\\\",\\\"operator\\\":\\\"Exists\\\"},{\\\"key\\\":\\\"node.kubernetes.io/not-ready\\\",\\\"operator\\\":\\\"Exists\\\"},{\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n            
    }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"version\": \"v1.22.0-beta.2\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.2\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                
                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"node.cloudprovider.kubernetes.io/uninitialized\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node.kubernetes.io/not-ready\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                
\"strategy\": {\n                    \"type\": \"Recreate\"\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-10-11T05:04:34Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:04:34Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-10-11T05:04:34Z\",\n                        \"lastTransitionTime\": \"2021-10-11T05:04:25Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"dns-controller-848dc45d58\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"10696\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1a461af7-2e25-4123-be1d-1c4abb6c88f2\",\n                \"resourceVersion\": \"814\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\",\n                \"labels\": {\n                    \"k8s-app\": 
\"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"558e6af6-3040-4e17-964b-47544dcc96d3\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"5dc785954d\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"5dc785954d\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n            
                        ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                   
                 {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                      
                      \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n    
                                        },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 2\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bf1f7686-4509-4654-aa7f-e4a8660f99f0\",\n                \"resourceVersion\": \"761\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"84d4cfd89c\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"2\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n 
                       \"name\": \"coredns-autoscaler\",\n                        \"uid\": \"dc394fa1-dfe8-4c12-ae15-8e83a7cae7fc\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\",\n                        \"pod-template-hash\": \"84d4cfd89c\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\",\n                            \"pod-template-hash\": \"84d4cfd89c\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                          
      ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-848dc45d58\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"50d2ee36-f0be-4931-b15f-6f4217d8dca8\",\n                \"resourceVersion\": \"483\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"848dc45d58\",\n                    \"version\": \"v1.22.0-beta.2\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"1\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"dns-controller\",\n                        \"uid\": \"2ed22d2c-5791-44c8-802e-f4803f90f613\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\",\n                        \"pod-template-hash\": \"848dc45d58\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"pod-template-hash\": \"848dc45d58\",\n                            \"version\": \"v1.22.0-beta.2\"\n                        },\n                        
\"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.2\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        
\"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"node.cloudprovider.kubernetes.io/uninitialized\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node.kubernetes.io/not-ready\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"10708\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-5cr6n\",\n                \"generateName\": \"coredns-5dc785954d-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"bd31bf47-020c-422e-89de-db6d29bfcc8c\",\n                \"resourceVersion\": \"810\",\n                \"creationTimestamp\": \"2021-10-11T05:06:10Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-5dc785954d\",\n                        \"uid\": \"1a461af7-2e25-4123-be1d-1c4abb6c88f2\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-bcwfc\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": 
\"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n             
                   \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-bcwfc\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n     
                       \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-45-252.ap-south-1.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n          
                      \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n                                                ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": 
\"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:10Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:20Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:20Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:10Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.45.252\",\n                \"podIP\": \"100.96.1.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.1.4\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:06:10Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:06:14Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        
\"imageID\": \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                        \"containerID\": \"containerd://55cb75cbd5ac223b76b3e966e5acc1858c8288a572e3f1eb5d1a2fb4f7d6a5d8\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-dv6ss\",\n                \"generateName\": \"coredns-5dc785954d-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fbc87bea-78f5-48ef-b94c-d45852511a13\",\n                \"resourceVersion\": \"783\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-5dc785954d\",\n                        \"uid\": \"1a461af7-2e25-4123-be1d-1c4abb6c88f2\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    
},\n                    {\n                        \"name\": \"kube-api-access-h8ckj\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        
\"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-h8ckj\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n       
                 \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n       
         \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-45-252.ap-south-1.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n                                                ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 
300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:05Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:14Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:14Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:05Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.45.252\",\n                \"podIP\": \"100.96.1.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.1.3\"\n                    }\n                ],\n   
             \"startTime\": \"2021-10-11T05:06:05Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:06:13Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"imageID\": \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                        \"containerID\": \"containerd://5f93d9b336f85312373042c1fc5907b64dd7025a87952a34fc9c5b133c81bf6f\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-vwtvb\",\n                \"generateName\": \"coredns-autoscaler-84d4cfd89c-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3b6a78a4-6cb5-4f12-ad12-76fa725278c2\",\n                \"resourceVersion\": \"760\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"84d4cfd89c\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                        \"uid\": 
\"bf1f7686-4509-4654-aa7f-e4a8660f99f0\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-8dwk8\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                  
              }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                        \"command\": [\n                            \"/cluster-proportional-autoscaler\",\n                            \"--namespace=kube-system\",\n                            \"--configmap=coredns-autoscaler\",\n                            \"--target=Deployment/coredns\",\n                            \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                            \"--logtostderr=true\",\n                            \"--v=2\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"20m\",\n                                \"memory\": \"10Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-8dwk8\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n    
                \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns-autoscaler\",\n                \"serviceAccount\": \"coredns-autoscaler\",\n                \"nodeName\": \"ip-172-20-45-252.ap-south-1.compute.internal\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:05Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:11Z\"\n                   
 },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:11Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:06:05Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.45.252\",\n                \"podIP\": \"100.96.1.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.1.2\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:06:05Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:06:10Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                        \"imageID\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def\",\n                        \"containerID\": \"containerd://66d56e161fabaaa52282f2dfde3c83aad7deada9a285a02acd4fc0de8e8d84cb\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-848dc45d58-zmml2\",\n                
\"generateName\": \"dns-controller-848dc45d58-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ef4fecb7-bc84-411e-a230-d16436c939c5\",\n                \"resourceVersion\": \"482\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"848dc45d58\",\n                    \"version\": \"v1.22.0-beta.2\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"dns-controller-848dc45d58\",\n                        \"uid\": \"50d2ee36-f0be-4931-b15f-6f4217d8dca8\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-p5cfz\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                    
            \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.2\",\n                        \"command\": [\n                            \"/dns-controller\",\n                            \"--watch-ingress=false\",\n                            \"--dns=aws-route53\",\n                            \"--zone=*/ZEMLNXIIWQ0RV\",\n                            \"--zone=*/*\",\n                            \"-v=2\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n  
                          \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-p5cfz\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"dns-controller\",\n                \"serviceAccount\": \"dns-controller\",\n                \"nodeName\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"node.cloudprovider.kubernetes.io/uninitialized\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": 
\"node-role.kubernetes.io/master\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:33Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:34Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:34Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:33Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.34.237\",\n                \"podIP\": \"172.20.34.237\",\n                \"podIPs\": [\n                    {\n         
               \"ip\": \"172.20.34.237\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:04:33Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:04:34Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.2\",\n                        \"imageID\": \"sha256:e97959a73cc59c881f2b162dd52c6dd74c8f951465fc93a427ef4dc19f85d6e0\",\n                        \"containerID\": \"containerd://9157f78e4358c4ee3efe58b4e27f22455e51c867cde304cd78e7cc772042b0d0\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"799e79f0-79b4-4f99-93d2-800f601fd9d6\",\n                \"resourceVersion\": \"550\",\n                \"creationTimestamp\": \"2021-10-11T05:04:58Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-events\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"f2fdba3a9d9fa2d2cc646c701c47a12f\",\n                    \"kubernetes.io/config.mirror\": \"f2fdba3a9d9fa2d2cc646c701c47a12f\",\n                    \"kubernetes.io/config.seen\": \"2021-10-11T05:03:01.119043243Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    
\"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                        \"uid\": \"b3ffab73-39b9-414e-b66e-78830022c11c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-events\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd-events.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20211007\",\n                        \"command\": [\n                            \"/bin/sh\",\n                        
    \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events --client-urls=https://__name__:4002 --cluster-name=etcd-events --containerized=true --dns-suffix=.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --grpc-port=3997 --peer-urls=https://__name__:2381 --quarantine-client-urls=https://__name__:3995 --v=6 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        
\"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:31Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                   
     \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:31Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.34.237\",\n                \"podIP\": \"172.20.34.237\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.34.237\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:03:01Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:03:30Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20211007\",\n                        \"imageID\": \"k8s.gcr.io/etcdadm/etcd-manager@sha256:36a55fd68b835aace1533cb4310f464a7f02d884fd7d4c3f528775e39f6bdb9f\",\n                        \"containerID\": \"containerd://e88dfacbbee2f78e93e71ccd24097b535c75507f6b7b5e535a41b018c39e0005\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c45dc365-4b8a-4aa3-a20c-e1350e4a4efd\",\n                \"resourceVersion\": \"502\",\n 
               \"creationTimestamp\": \"2021-10-11T05:04:39Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-main\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"595c00043cbaf5f2df12d8a648379a64\",\n                    \"kubernetes.io/config.mirror\": \"595c00043cbaf5f2df12d8a648379a64\",\n                    \"kubernetes.io/config.seen\": \"2021-10-11T05:03:01.119061830Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                        \"uid\": \"b3ffab73-39b9-414e-b66e-78830022c11c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-main\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        
\"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20211007\",\n                        \"command\": [\n                            \"/bin/sh\",\n                            \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/main --client-urls=https://__name__:4001 --cluster-name=etcd --containerized=true --dns-suffix=.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io --grpc-port=3996 --peer-urls=https://__name__:2380 --quarantine-client-urls=https://__name__:3994 --v=6 --volume-name-tag=k8s.io/etcd/main --volume-provider=aws --volume-tag=k8s.io/etcd/main --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                
                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                 
       \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:31Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:31Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.34.237\",\n                \"podIP\": \"172.20.34.237\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.34.237\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:03:01Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:03:30Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20211007\",\n                        \"imageID\": \"k8s.gcr.io/etcdadm/etcd-manager@sha256:36a55fd68b835aace1533cb4310f464a7f02d884fd7d4c3f528775e39f6bdb9f\",\n                        \"containerID\": 
\"containerd://354ad146f5355bfaf475d346df02100b8de9b316b366c62c63629310b5aa6634\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-8gqh5\",\n                \"generateName\": \"kops-controller-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a1b833c4-83cc-4207-b17b-3d04df7a91ea\",\n                \"resourceVersion\": \"460\",\n                \"creationTimestamp\": \"2021-10-11T05:04:25Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"79d87d99fd\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"pod-template-generation\": \"1\",\n                    \"version\": \"v1.22.0-beta.2\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kops-controller\",\n                        \"uid\": \"141f6820-8975-46c0-b87d-7473ca57b2c9\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kops-controller-config\",\n                        \"configMap\": {\n                            \"name\": \"kops-controller\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n          
              \"name\": \"kops-controller-pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kops-controller/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-k95nz\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n     
                           }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.2\",\n                        \"command\": [\n                            \"/kops-controller\",\n                            \"--v=2\",\n                            \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-k95nz\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": 
\"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"kops-controller\",\n                \"serviceAccount\": \"kops-controller\",\n                \"nodeName\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-34-237.ap-south-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": 
\"node.cloudprovider.kubernetes.io/uninitialized\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n            
        }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:26Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:27Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:27Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:25Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.34.237\",\n                \"podIP\": \"172.20.34.237\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.34.237\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:04:26Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"state\": {\n                            \"running\": {\n                         
       \"startedAt\": \"2021-10-11T05:04:26Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.2\",\n                        \"imageID\": \"sha256:214fd0c2ba2a45baefe26669daefecb4e346736e3038a64006fc9670b8e48c68\",\n                        \"containerID\": \"containerd://a22357b0ec6a31b3c5a565a934f220af781f1057f326fb07f757b24b6ec402df\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"66e5f967-e301-44bc-9793-6161798370f2\",\n                \"resourceVersion\": \"596\",\n                \"creationTimestamp\": \"2021-10-11T05:05:13Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-apiserver\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/external\": \"api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\",\n                    \"dns.alpha.kubernetes.io/internal\": \"api.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\",\n                    \"kubectl.kubernetes.io/default-container\": \"kube-apiserver\",\n                    \"kubernetes.io/config.hash\": \"951972ef0f3380ccefa22b67d90d7acd\",\n                    \"kubernetes.io/config.mirror\": \"951972ef0f3380ccefa22b67d90d7acd\",\n                    \"kubernetes.io/config.seen\": \"2021-10-11T05:03:01.119063655Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                
\"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                        \"uid\": \"b3ffab73-39b9-414e-b66e-78830022c11c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-apiserver.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                       
     \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/in-tree-cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubernetesca\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/ca.crt\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkapi\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/kube-apiserver\",\n                            \"type\": \"\"\n                        }\n                    },\n     
               {\n                        \"name\": \"srvsshproxy\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/sshproxy\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"healthcheck-secrets\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kube-apiserver-healthcheck/secrets\",\n                            \"type\": \"Directory\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.5\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-apiserver\"\n                        ],\n                        \"args\": [\n                            \"--allow-privileged=true\",\n                            \"--anonymous-auth=false\",\n                            \"--api-audiences=kubernetes.svc.default\",\n                            \"--apiserver-count=1\",\n                            \"--authorization-mode=Node,RBAC\",\n                            \"--bind-address=0.0.0.0\",\n                            \"--client-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--cloud-config=/etc/kubernetes/in-tree-cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota\",\n                            \"--etcd-cafile=/srv/kubernetes/kube-apiserver/etcd-ca.crt\",\n                            \"--etcd-certfile=/srv/kubernetes/kube-apiserver/etcd-client.crt\",\n                            
\"--etcd-keyfile=/srv/kubernetes/kube-apiserver/etcd-client.key\",\n                            \"--etcd-servers-overrides=/events#https://127.0.0.1:4002\",\n                            \"--etcd-servers=https://127.0.0.1:4001\",\n                            \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/srv/kubernetes/kube-apiserver/kubelet-api.crt\",\n                            \"--kubelet-client-key=/srv/kubernetes/kube-apiserver/kubelet-api.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP\",\n                            \"--proxy-client-cert-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator.crt\",\n                            \"--proxy-client-key-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator.key\",\n                            \"--requestheader-allowed-names=aggregator\",\n                            \"--requestheader-client-ca-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n                            \"--requestheader-username-headers=X-Remote-User\",\n                            \"--secure-port=443\",\n                            \"--service-account-issuer=https://api.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\",\n                            \"--service-account-jwks-uri=https://api.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/openid/v1/jwks\",\n                            \"--service-account-key-file=/srv/kubernetes/kube-apiserver/service-account.pub\",\n                            \"--service-account-signing-key-file=/srv/kubernetes/kube-apiserver/service-account.key\",\n                            \"--service-cluster-ip-range=100.64.0.0/13\",\n                            \"--storage-backend=etcd3\",\n                            
\"--tls-cert-file=/srv/kubernetes/kube-apiserver/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/kube-apiserver/server.key\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-apiserver.log\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"https\",\n                                \"hostPort\": 443,\n                                \"containerPort\": 443,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"150m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-apiserver.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": 
\"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/in-tree-cloud.config\"\n                            },\n                            {\n                                \"name\": \"kubernetesca\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/ca.crt\"\n                            },\n                            {\n                                \"name\": \"srvkapi\",\n                                
\"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/kube-apiserver\"\n                            },\n                            {\n                                \"name\": \"srvsshproxy\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/sshproxy\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 45,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    },\n                    {\n                        \"name\": \"healthcheck\",\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.2\",\n                        \"command\": [\n                            \"/kube-apiserver-healthcheck\"\n                        ],\n                        \"args\": [\n                            \"--ca-cert=/secrets/ca.crt\",\n                            \"--client-cert=/secrets/client.crt\",\n                            \"--client-key=/secrets/client.key\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": 
\"healthcheck-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/secrets\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/.kube-apiserver-healthcheck/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": 
true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:44Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:44Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.34.237\",\n                \"podIP\": \"172.20.34.237\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.34.237\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:03:01Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"healthcheck\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:03:23Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": 
true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.2\",\n                        \"imageID\": \"sha256:2b4dba968e11ecf0f4097f6e6ceb1b8ef75078a7e60c2065c2724b5ac16a9fa0\",\n                        \"containerID\": \"containerd://60b77f546dc92beb77559da86a68a29aba0d9fc0cc183305c4ccf1d32751afbb\",\n                        \"started\": true\n                    },\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:03:43Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-10-11T05:03:21Z\",\n                                \"finishedAt\": \"2021-10-11T05:03:42Z\",\n                                \"containerID\": \"containerd://e44088d4dc6b9d6a8c680bc847f7517846a46a85c7df6a6f0b1b1e9ffa24f778\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 1,\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.5\",\n                        \"imageID\": \"sha256:7b2ac941d4c3053c5d5b9ccaf9de1eb50c2f086b7f0461ff37d22d26c7ab14e4\",\n                        \"containerID\": \"containerd://34e427d0e0679459f62a407a2ab2ef25af3204e4cfcd70e8411aa065139232f9\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-34-237.ap-south-1.compute.internal\",\n       
         \"namespace\": \"kube-system\",\n                \"uid\": \"ade8a9a2-94fc-4ae7-97b8-292e0d584c16\",\n                \"resourceVersion\": \"642\",\n                \"creationTimestamp\": \"2021-10-11T05:05:39Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-controller-manager\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"92d8f0527d5807f3c12dacad7d0dce41\",\n                    \"kubernetes.io/config.mirror\": \"92d8f0527d5807f3c12dacad7d0dce41\",\n                    \"kubernetes.io/config.seen\": \"2021-10-11T05:03:01.119065253Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                        \"uid\": \"b3ffab73-39b9-414e-b66e-78830022c11c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-controller-manager.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n   
                         \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n          
              \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/in-tree-cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cabundle\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/ca.crt\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkcm\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/kube-controller-manager\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlibkcm\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-controller-manager\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"volplugins\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.5\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-controller-manager\"\n                        ],\n                        \"args\": [\n                            \"--allocate-node-cidrs=true\",\n                            \"--attach-detach-reconcile-sync-period=1m0s\",\n                            
\"--authentication-kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--authorization-kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--cloud-config=/etc/kubernetes/in-tree-cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--cluster-name=e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\",\n                            \"--cluster-signing-cert-file=/srv/kubernetes/kube-controller-manager/ca.crt\",\n                            \"--cluster-signing-key-file=/srv/kubernetes/kube-controller-manager/ca.key\",\n                            \"--configure-cloud-routes=true\",\n                            \"--flex-volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"--kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--leader-elect=true\",\n                            \"--root-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--service-account-private-key-file=/srv/kubernetes/kube-controller-manager/service-account.key\",\n                            \"--tls-cert-file=/srv/kubernetes/kube-controller-manager/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/kube-controller-manager/server.key\",\n                            \"--use-service-account-credentials=true\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-controller-manager.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n    
                        {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-controller-manager.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                         
       \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/in-tree-cloud.config\"\n                            },\n                            {\n                                \"name\": \"cabundle\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/ca.crt\"\n                            },\n                            {\n                                \"name\": \"srvkcm\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/kube-controller-manager\"\n                            },\n                            {\n                                \"name\": \"varlibkcm\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-controller-manager\"\n                            },\n                            {\n                                \"name\": \"volplugins\",\n                                \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10257,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n        
                    \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": 
\"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:10Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:10Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.34.237\",\n                \"podIP\": \"172.20.34.237\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.34.237\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:03:01Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:04:09Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-10-11T05:03:33Z\",\n                                \"finishedAt\": \"2021-10-11T05:03:53Z\",\n                                \"containerID\": \"containerd://4eeae486881e77bbaaaab8c995f79dac43cbb9e43b92bf4bee27ff8b37c0b01c\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 2,\n          
              \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.5\",\n                        \"imageID\": \"sha256:184ef4d127b40e20fe2fee7bc542de289e083a62e929d07346fc6a643c583659\",\n                        \"containerID\": \"containerd://da3bc1a7617fee72630f1c4ada0259c75a6d0ca9acec2e316bf2a1f1333a123b\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-33-34.ap-south-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"263ffe42-6fb7-4996-949e-785cbcfbf6b0\",\n                \"resourceVersion\": \"840\",\n                \"creationTimestamp\": \"2021-10-11T05:06:25Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"c61d2f87510c22e6fd486b7c3b36ea71\",\n                    \"kubernetes.io/config.mirror\": \"c61d2f87510c22e6fd486b7c3b36ea71\",\n                    \"kubernetes.io/config.seen\": \"2021-10-11T05:05:08.316661397Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-33-34.ap-south-1.compute.internal\",\n                        \"uid\": \"da3dc7f0-20d6-4944-a753-df146d4141a0\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n           
             \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-33-34.ap-south-1.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n            
                \"--master=https://api.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n 
                       \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-33-34.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:08Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:12Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": 
\"2021-10-11T05:05:12Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:08Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.33.34\",\n                \"podIP\": \"172.20.33.34\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.33.34\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:05:08Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:05:11Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\",\n                        \"imageID\": \"sha256:e08abd2be7302b7de4cc9d06bc6045be005c870f93065814da1d761980f7218d\",\n                        \"containerID\": \"containerd://a09a51cd14aaa14fca0f066dbc6f74f91f4287ebbadfebbb9f856a9eaefef428\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"720acad6-c5c7-4a4e-8517-8d596e44c190\",\n                \"resourceVersion\": \"527\",\n                \"creationTimestamp\": \"2021-10-11T05:04:47Z\",\n                \"labels\": {\n                    \"k8s-app\": 
\"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"bf6025a5cd1ddb4585f52d00767b7dbc\",\n                    \"kubernetes.io/config.mirror\": \"bf6025a5cd1ddb4585f52d00767b7dbc\",\n                    \"kubernetes.io/config.seen\": \"2021-10-11T05:03:01.119066597Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                        \"uid\": \"b3ffab73-39b9-414e-b66e-78830022c11c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            
\"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-34-237.ap-south-1.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://127.0.0.1\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                
                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                
\"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:23Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:23Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.34.237\",\n                \"podIP\": \"172.20.34.237\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.34.237\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:03:01Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:03:21Z\"\n       
                     }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\",\n                        \"imageID\": \"sha256:e08abd2be7302b7de4cc9d06bc6045be005c870f93065814da1d761980f7218d\",\n                        \"containerID\": \"containerd://d657ee3a2fc6c588df6a6572a33565b503a57c36cf9bf665e380e257ff23f7f2\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-42-144.ap-south-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cb6ff4cf-af23-44fa-ac48-2a1d48047164\",\n                \"resourceVersion\": \"809\",\n                \"creationTimestamp\": \"2021-10-11T05:06:14Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"2c78924c96dd03df94010fca1114716a\",\n                    \"kubernetes.io/config.mirror\": \"2c78924c96dd03df94010fca1114716a\",\n                    \"kubernetes.io/config.seen\": \"2021-10-11T05:05:00.351697713Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-42-144.ap-south-1.compute.internal\",\n                        \"uid\": \"f1373ab1-2f0f-424f-b4ea-51087af428ec\",\n                        \"controller\": true\n              
      }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n     
                       \"--hostname-override=ip-172-20-42-144.ap-south-1.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        
\"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-42-144.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:00Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:04Z\"\n                    },\n                    {\n                 
       \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:04Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:00Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.42.144\",\n                \"podIP\": \"172.20.42.144\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.42.144\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:05:00Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:05:03Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\",\n                        \"imageID\": \"sha256:e08abd2be7302b7de4cc9d06bc6045be005c870f93065814da1d761980f7218d\",\n                        \"containerID\": \"containerd://ec2c7535bfc6f128067fb2cdf92a4b533877f558465fafa5a39daeb623c805f7\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-43-95.ap-south-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"a9fbfd11-2ed2-474d-8b3f-fc23521ee307\",\n                \"resourceVersion\": \"869\",\n                \"creationTimestamp\": \"2021-10-11T05:06:37Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"2644900af7680b3908d1740b43c7c036\",\n                    \"kubernetes.io/config.mirror\": \"2644900af7680b3908d1740b43c7c036\",\n                    \"kubernetes.io/config.seen\": \"2021-10-11T05:05:07.892068976Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-43-95.ap-south-1.compute.internal\",\n                        \"uid\": \"581a3a3a-5ec9-40a8-983a-1bdd896dbf0e\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            
\"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-43-95.ap-south-1.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n           
                     \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-43-95.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                
        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:08Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:12Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:12Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:05:08Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.43.95\",\n                \"podIP\": \"172.20.43.95\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.43.95\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:05:08Z\",\n                \"containerStatuses\": 
[\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:05:11Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\",\n                        \"imageID\": \"sha256:e08abd2be7302b7de4cc9d06bc6045be005c870f93065814da1d761980f7218d\",\n                        \"containerID\": \"containerd://b4c36a967b0022102651851896372d9bdf605bf5a564e62a7a1a9f9e6c43746a\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-45-252.ap-south-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b9b3d6fc-319c-4806-94cd-d641bb3412b3\",\n                \"resourceVersion\": \"859\",\n                \"creationTimestamp\": \"2021-10-11T05:06:29Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"463e8182fa542aac3b016d172aa62aca\",\n                    \"kubernetes.io/config.mirror\": \"463e8182fa542aac3b016d172aa62aca\",\n                    \"kubernetes.io/config.seen\": \"2021-10-11T05:04:54.714053879Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        
\"kind\": \"Node\",\n                        \"name\": \"ip-172-20-45-252.ap-south-1.compute.internal\",\n                        \"uid\": \"03f1539f-a06d-4014-9b60-99693502d14c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\",\n                        \"command\": [\n                
            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-45-252.ap-south-1.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n    
                        {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-45-252.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:55Z\"\n                    },\n                    {\n                        \"type\": 
\"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:58Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:58Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:04:55Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.45.252\",\n                \"podIP\": \"172.20.45.252\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.45.252\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:04:55Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-10-11T05:04:58Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\",\n                        \"imageID\": \"sha256:e08abd2be7302b7de4cc9d06bc6045be005c870f93065814da1d761980f7218d\",\n                        \"containerID\": \"containerd://ba1eb3c1e4c3c395d4ce1924164befcdbfaf57e8a81cca6d023ece908bbf43ba\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n   
     },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9a9df5db-d9d3-4e02-8ade-81f0f952d4f3\",\n                \"resourceVersion\": \"528\",\n                \"creationTimestamp\": \"2021-10-11T05:04:44Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-scheduler\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"a71e75371d6b74e1efc358d2cc996735\",\n                    \"kubernetes.io/config.mirror\": \"a71e75371d6b74e1efc358d2cc996735\",\n                    \"kubernetes.io/config.seen\": \"2021-10-11T05:03:01.119068045Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                        \"uid\": \"b3ffab73-39b9-414e-b66e-78830022c11c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"varlibkubescheduler\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-scheduler\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvscheduler\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/kube-scheduler\",\n                            \"type\": \"\"\n                        }\n                    },\n        
            {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-scheduler.log\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.5\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-scheduler\"\n                        ],\n                        \"args\": [\n                            \"--authentication-kubeconfig=/var/lib/kube-scheduler/kubeconfig\",\n                            \"--authorization-kubeconfig=/var/lib/kube-scheduler/kubeconfig\",\n                            \"--config=/var/lib/kube-scheduler/config.yaml\",\n                            \"--leader-elect=true\",\n                            \"--tls-cert-file=/srv/kubernetes/kube-scheduler/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/kube-scheduler/server.key\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-scheduler.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"varlibkubescheduler\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-scheduler\"\n                            },\n                            {\n                                \"name\": 
\"srvscheduler\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/kube-scheduler\"\n                            },\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-scheduler.log\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10251,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-34-237.ap-south-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n      
              }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:23Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:23Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-10-11T05:03:01Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.34.237\",\n                \"podIP\": \"172.20.34.237\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.34.237\"\n                    }\n                ],\n                \"startTime\": \"2021-10-11T05:03:01Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                    
            \"startedAt\": \"2021-10-11T05:03:21Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.5\",\n                        \"imageID\": \"sha256:8e60ea3644d6dd6c1ed564f36ab9ef7bf19bfde909a5dcc4d974ac21ea1b3b3e\",\n                        \"containerID\": \"containerd://8e8060891658ffc9388798a9ec80a7684f4bf71e0c0da3e9d3622a25e0ec32fe\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container coredns of pod kube-system/coredns-5dc785954d-5cr6n ====\n[INFO] plugin/ready: Still waiting on: \"kubernetes\"\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 9e3e34ac93d9bb69126337d32f1195e3\nCoreDNS-1.8.4\nlinux/amd64, go1.16.4, 053c4d5\n==== END logs for container coredns of pod kube-system/coredns-5dc785954d-5cr6n ====\n==== START logs for container coredns of pod kube-system/coredns-5dc785954d-dv6ss ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 9e3e34ac93d9bb69126337d32f1195e3\nCoreDNS-1.8.4\nlinux/amd64, go1.16.4, 053c4d5\n==== END logs for container coredns of pod kube-system/coredns-5dc785954d-dv6ss ====\n==== START logs for container autoscaler of pod kube-system/coredns-autoscaler-84d4cfd89c-vwtvb ====\nI1011 05:06:10.573246       1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns\nI1011 05:06:10.827272       1 autoscaler_server.go:157] ConfigMap not found: configmaps \"coredns-autoscaler\" not found, will create one with default params\nI1011 05:06:10.831036       1 k8sclient.go:147] Created ConfigMap coredns-autoscaler in namespace kube-system\nI1011 05:06:10.831060       1 plugin.go:50] Set control mode to linear\nI1011 05:06:10.831066       1 
linear_controller.go:60] ConfigMap version change (old:  new: 746) - rebuilding params\nI1011 05:06:10.831071       1 linear_controller.go:61] Params from apiserver: \n{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}\nI1011 05:06:10.831122       1 linear_controller.go:80] Defaulting min replicas count to 1 for linear controller\nI1011 05:06:10.832916       1 k8sclient.go:272] Cluster status: SchedulableNodes[5], SchedulableCores[10]\nI1011 05:06:10.832932       1 k8sclient.go:273] Replicas are not as expected : updating replicas from 1 to 2\n==== END logs for container autoscaler of pod kube-system/coredns-autoscaler-84d4cfd89c-vwtvb ====\n==== START logs for container dns-controller of pod kube-system/dns-controller-848dc45d58-zmml2 ====\ndns-controller version 0.1\nI1011 05:04:34.216509       1 main.go:199] initializing the watch controllers, namespace: \"\"\nI1011 05:04:34.216539       1 main.go:223] Ingress controller disabled\nI1011 05:04:34.216561       1 dnscontroller.go:108] starting DNS controller\nI1011 05:04:34.217031       1 node.go:60] starting node controller\nI1011 05:04:34.217467       1 service.go:60] starting service controller\nI1011 05:04:34.217486       1 dnscontroller.go:170] scope not yet ready: node\nI1011 05:04:34.217704       1 pod.go:60] starting pod controller\nI1011 05:04:34.234258       1 dnscontroller.go:625] Update desired state: node/ip-172-20-34-237.ap-south-1.compute.internal: [{A node/ip-172-20-34-237.ap-south-1.compute.internal/internal 172.20.34.237 true} {A node/ip-172-20-34-237.ap-south-1.compute.internal/external 13.233.106.39 true} {A node/role=master/internal 172.20.34.237 true} {A node/role=master/external 13.233.106.39 true} {A node/role=master/ ip-172-20-34-237.ap-south-1.compute.internal true} {A node/role=master/ ip-172-20-34-237.ap-south-1.compute.internal true} {A node/role=master/ ec2-13-233-106-39.ap-south-1.compute.amazonaws.com true}]\nI1011 05:04:34.239247       1 
dnscontroller.go:625] Update desired state: pod/kube-system/kops-controller-8gqh5: [{A kops-controller.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io. 172.20.34.237 false}]\nI1011 05:04:39.218005       1 dnscache.go:74] querying all DNS zones (no cached results)\nI1011 05:04:40.196326       1 dnscontroller.go:274] Using default TTL of 1m0s\nI1011 05:04:40.196352       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI1011 05:04:42.826529       1 dnscontroller.go:585] Adding DNS changes to batch {A kops-controller.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io.} [172.20.34.237]\nI1011 05:04:42.826561       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI1011 05:05:13.549231       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal: [{_alias api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io. node/ip-172-20-34-237.ap-south-1.compute.internal/external false}]\nI1011 05:05:18.175976       1 dnscontroller.go:274] Using default TTL of 1m0s\nI1011 05:05:18.176003       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI1011 05:05:21.559718       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-34-237.ap-south-1.compute.internal: [{_alias api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io. node/ip-172-20-34-237.ap-south-1.compute.internal/external false} {A api.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io. 
172.20.34.237 false}]
I1011 05:05:21.778776       1 dnscontroller.go:585] Adding DNS changes to batch {A api.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io.} [13.233.106.39]
I1011 05:05:21.778807       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV
I1011 05:05:27.163039       1 dnscontroller.go:274] Using default TTL of 1m0s
I1011 05:05:27.163069       1 dnscontroller.go:482] Querying all dnsprovider records for zone "test-cncf-aws.k8s.io."
I1011 05:05:29.925379       1 dnscontroller.go:585] Adding DNS changes to batch {A api.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io.} [172.20.34.237]
I1011 05:05:29.925411       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV
I1011 05:05:55.487156       1 dnscontroller.go:625] Update desired state: node/ip-172-20-45-252.ap-south-1.compute.internal: [{A node/ip-172-20-45-252.ap-south-1.compute.internal/internal 172.20.45.252 true} {A node/ip-172-20-45-252.ap-south-1.compute.internal/external 13.126.42.206 true} {A node/role=node/internal 172.20.45.252 true} {A node/role=node/external 13.126.42.206 true} {A node/role=node/ ip-172-20-45-252.ap-south-1.compute.internal true} {A node/role=node/ ip-172-20-45-252.ap-south-1.compute.internal true} {A node/role=node/ ec2-13-126-42-206.ap-south-1.compute.amazonaws.com true}]
I1011 05:06:01.095478       1 dnscontroller.go:625] Update desired state: node/ip-172-20-42-144.ap-south-1.compute.internal: [{A node/ip-172-20-42-144.ap-south-1.compute.internal/internal 172.20.42.144 true} {A node/ip-172-20-42-144.ap-south-1.compute.internal/external 3.109.62.133 true} {A node/role=node/internal 172.20.42.144 true} {A node/role=node/external 3.109.62.133 true} {A node/role=node/ ip-172-20-42-144.ap-south-1.compute.internal true} {A node/role=node/ ip-172-20-42-144.ap-south-1.compute.internal true} {A node/role=node/ ec2-3-109-62-133.ap-south-1.compute.amazonaws.com true}]
I1011 05:06:08.671000       1 dnscontroller.go:625] Update desired state: node/ip-172-20-43-95.ap-south-1.compute.internal: [{A node/ip-172-20-43-95.ap-south-1.compute.internal/internal 172.20.43.95 true} {A node/ip-172-20-43-95.ap-south-1.compute.internal/external 3.108.218.97 true} {A node/role=node/internal 172.20.43.95 true} {A node/role=node/external 3.108.218.97 true} {A node/role=node/ ip-172-20-43-95.ap-south-1.compute.internal true} {A node/role=node/ ip-172-20-43-95.ap-south-1.compute.internal true} {A node/role=node/ ec2-3-108-218-97.ap-south-1.compute.amazonaws.com true}]
I1011 05:06:09.085827       1 dnscontroller.go:625] Update desired state: node/ip-172-20-33-34.ap-south-1.compute.internal: [{A node/ip-172-20-33-34.ap-south-1.compute.internal/internal 172.20.33.34 true} {A node/ip-172-20-33-34.ap-south-1.compute.internal/external 13.235.245.56 true} {A node/role=node/internal 172.20.33.34 true} {A node/role=node/external 13.235.245.56 true} {A node/role=node/ ip-172-20-33-34.ap-south-1.compute.internal true} {A node/role=node/ ip-172-20-33-34.ap-south-1.compute.internal true} {A node/role=node/ ec2-13-235-245-56.ap-south-1.compute.amazonaws.com true}]
==== END logs for container dns-controller of pod kube-system/dns-controller-848dc45d58-zmml2 ====
==== START logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-34-237.ap-south-1.compute.internal ====
etcd-manager
I1011 05:03:30.793809   20215 volumes.go:86] AWS API Request: ec2metadata/GetToken
I1011 05:03:30.794700   20215 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData
I1011 05:03:30.795370   20215 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I1011 05:03:30.795911   20215 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I1011 05:03:30.796721   20215 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I1011 05:03:30.797277   20215 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/events k8s.io/role/master=1 kubernetes.io/cluster/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/events
I1011 05:03:30.799460   20215 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I1011 05:03:30.919733   20215 mounter.go:304] Trying to mount master volume: "vol-0916f2a071e4c09ad"
I1011 05:03:30.919749   20215 volumes.go:331] Trying to attach volume "vol-0916f2a071e4c09ad" at "/dev/xvdu"
I1011 05:03:30.919837   20215 volumes.go:86] AWS API Request: ec2/AttachVolume
I1011 05:03:31.261870   20215 volumes.go:349] AttachVolume request returned {
  AttachTime: 2021-10-11 05:03:31.25 +0000 UTC,
  Device: "/dev/xvdu",
  InstanceId: "i-0c28686e78faa9b57",
  State: "attaching",
  VolumeId: "vol-0916f2a071e4c09ad"
}
I1011 05:03:31.262034   20215 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I1011 05:03:31.346316   20215 mounter.go:318] Currently attached volumes: [0xc0000d6000]
I1011 05:03:31.346334   20215 mounter.go:72] Master volume "vol-0916f2a071e4c09ad" is attached at "/dev/xvdu"
I1011 05:03:31.346434   20215 mounter.go:86] Doing safe-format-and-mount of /dev/xvdu to /mnt/master-vol-0916f2a071e4c09ad
I1011 05:03:31.346498   20215 volumes.go:234] volume vol-0916f2a071e4c09ad not mounted at /rootfs/dev/xvdu
I1011 05:03:31.346514   20215 volumes.go:263] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0916f2a071e4c09ad"
I1011 05:03:31.346519   20215 volumes.go:251] volume vol-0916f2a071e4c09ad not mounted at nvme-Amazon_Elastic_Block_Store_vol0916f2a071e4c09ad
I1011 05:03:31.346523   20215 mounter.go:121] Waiting for volume "vol-0916f2a071e4c09ad" to be mounted
I1011 05:03:32.346609   20215 volumes.go:234] volume vol-0916f2a071e4c09ad not mounted at /rootfs/dev/xvdu
I1011 05:03:32.346799   20215 volumes.go:248] found nvme volume "nvme-Amazon_Elastic_Block_Store_vol0916f2a071e4c09ad" at "/dev/nvme1n1"
I1011 05:03:32.346815   20215 mounter.go:125] Found volume "vol-0916f2a071e4c09ad" mounted at device "/dev/nvme1n1"
I1011 05:03:32.347942   20215 mounter.go:171] Creating mount directory "/rootfs/mnt/master-vol-0916f2a071e4c09ad"
I1011 05:03:32.348022   20215 mounter.go:176] Mounting device "/dev/nvme1n1" on "/mnt/master-vol-0916f2a071e4c09ad"
I1011 05:03:32.348034   20215 mount_linux.go:463] Attempting to determine if disk "/dev/nvme1n1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])
I1011 05:03:32.348051   20215 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]
I1011 05:03:32.366428   20215 mount_linux.go:466] Output: ""
I1011 05:03:32.366450   20215 mount_linux.go:425] Disk "/dev/nvme1n1" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/nvme1n1]
I1011 05:03:32.366470   20215 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme1n1]
I1011 05:03:32.605355   20215 mount_linux.go:435] Disk successfully formatted (mkfs): ext4 - /dev/nvme1n1 /mnt/master-vol-0916f2a071e4c09ad
I1011 05:03:32.605371   20215 mount_linux.go:453] Attempting to mount disk /dev/nvme1n1 in ext4 format at /mnt/master-vol-0916f2a071e4c09ad
I1011 05:03:32.605385   20215 nsenter.go:80] nsenter mount /dev/nvme1n1 /mnt/master-vol-0916f2a071e4c09ad ext4 [defaults]
I1011 05:03:32.605408   20215 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-0916f2a071e4c09ad --scope -- /bin/mount -t ext4 -o defaults /dev/nvme1n1 /mnt/master-vol-0916f2a071e4c09ad]
I1011 05:03:32.683734   20215 nsenter.go:84] Output of mounting /dev/nvme1n1 to /mnt/master-vol-0916f2a071e4c09ad: Running scope as unit: run-r8526a523dd3846189c26558fc60019c7.scope
I1011 05:03:32.683756   20215 mount_linux.go:463] Attempting to determine if disk "/dev/nvme1n1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])
I1011 05:03:32.683781   20215 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]
I1011 05:03:32.705489   20215 mount_linux.go:466] Output: "DEVNAME=/dev/nvme1n1\nTYPE=ext4\n"
I1011 05:03:32.705515   20215 resizefs_linux.go:55] ResizeFS.Resize - Expanding mounted volume /dev/nvme1n1
I1011 05:03:32.705526   20215 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme1n1]
I1011 05:03:32.709789   20215 resizefs_linux.go:70] Device /dev/nvme1n1 resized successfully
I1011 05:03:32.725889   20215 mount_linux.go:211] Detected OS with systemd
I1011 05:03:32.728061   20215 mounter.go:224] mounting inside container: /rootfs/dev/nvme1n1 -> /rootfs/mnt/master-vol-0916f2a071e4c09ad
I1011 05:03:32.728081   20215 mount_linux.go:180] Mounting cmd (systemd-run) with arguments (--description=Kubernetes transient mount for /rootfs/mnt/master-vol-0916f2a071e4c09ad --scope -- mount  /rootfs/dev/nvme1n1 /rootfs/mnt/master-vol-0916f2a071e4c09ad)
I1011 05:03:32.767376   20215 mounter.go:94] mounted master volume "vol-0916f2a071e4c09ad" on /mnt/master-vol-0916f2a071e4c09ad
I1011 05:03:32.767403   20215 main.go:320] discovered IP address: 172.20.34.237
I1011 05:03:32.767409   20215 main.go:325] Setting data dir to /rootfs/mnt/master-vol-0916f2a071e4c09ad
I1011 05:03:32.891298   20215 certs.go:211] generating certificate for "etcd-manager-server-etcd-events-a"
I1011 05:03:33.100945   20215 certs.go:211] generating certificate for "etcd-manager-client-etcd-events-a"
I1011 05:03:33.105171   20215 server.go:87] starting GRPC server using TLS, ServerName="etcd-manager-server-etcd-events-a"
I1011 05:03:33.105997   20215 main.go:473] peerClientIPs: [172.20.34.237]
I1011 05:03:33.182378   20215 certs.go:211] generating certificate for "etcd-manager-etcd-events-a"
I1011 05:03:33.184190   20215 server.go:105] GRPC server listening on "172.20.34.237:3997"
I1011 05:03:33.184953   20215 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I1011 05:03:33.301246   20215 volumes.go:86] AWS API Request: ec2/DescribeInstances
I1011 05:03:33.367196   20215 peers.go:115] found new candidate peer from discovery: etcd-events-a [{172.20.34.237 0} {172.20.34.237 0}]
I1011 05:03:33.367231   20215 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:03:33.367310   20215 peers.go:295] connecting to peer "etcd-events-a" with TLS policy, servername="etcd-manager-server-etcd-events-a"
I1011 05:03:35.185650   20215 controller.go:187] starting controller iteration
I1011 05:03:35.186045   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:03:35.186215   20215 commands.go:41] refreshing commands
I1011 05:03:35.186325   20215 s3context.go:340] product_uuid is "ec2bc99a-493a-2368-35b3-ccee9a048468", assuming running on EC2
I1011 05:03:35.187577   20215 s3context.go:169] got region from metadata: "ap-south-1"
I1011 05:03:35.220422   20215 s3context.go:216] found bucket in region "us-west-1"
I1011 05:03:36.237283   20215 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands
I1011 05:03:36.237303   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec"
I1011 05:03:46.484754   20215 controller.go:187] starting controller iteration
I1011 05:03:46.484779   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:03:46.485048   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:03:46.485225   20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:03:46.485513   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > }
I1011 05:03:46.485563   20215 controller.go:301] etcd cluster members: map[]
I1011 05:03:46.485573   20215 controller.go:639] sending member map to all peers: 
I1011 05:03:46.485804   20215 commands.go:38] not refreshing commands - TTL not hit
I1011 05:03:46.485815   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:03:47.415969   20215 controller.go:357] detected that there is no existing cluster
I1011 05:03:47.415984   20215 commands.go:41] refreshing commands
I1011 05:03:47.725645   20215 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands
I1011 05:03:47.725680   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec"
I1011 05:03:47.966712   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:03:47.966988   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:03:47.967008   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:03:47.967066   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:03:47.967250   20215 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > }]
I1011 05:03:47.967615   20215 newcluster.go:153] JoinClusterResponse: 
I1011 05:03:47.968716   20215 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" quarantined:true 
I1011 05:03:47.968759   20215 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA
I1011 05:03:47.969619   20215 pki.go:58] adding peerClientIPs [172.20.34.237]
I1011 05:03:47.969642   20215 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io] IPs:[172.20.34.237 127.0.0.1]} Usages:[2 1]}
I1011 05:03:48.283701   20215 certs.go:211] generating certificate for "etcd-events-a"
I1011 05:03:48.286039   20215 pki.go:108] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}
I1011 05:03:48.405144   20215 certs.go:211] generating certificate for "etcd-events-a"
I1011 05:03:48.453149   20215 certs.go:211] generating certificate for "etcd-events-a"
I1011 05:03:48.456157   20215 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]
I1011 05:03:48.457673   20215 newcluster.go:171] JoinClusterResponse: 
I1011 05:03:48.457722   20215 s3fs.go:257] Writing file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec"
I1011 05:03:48.457734   20215 s3context.go:244] Checking default bucket encryption for "k8s-kops-prow"
2021-10-11 05:03:48.463822 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995
2021-10-11 05:03:48.463865 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/server.crt
2021-10-11 05:03:48.463872 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true
2021-10-11 05:03:48.463884 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA
2021-10-11 05:03:48.463898 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false
2021-10-11 05:03:48.463919 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381
2021-10-11 05:03:48.463927 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381
2021-10-11 05:03:48.463932 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
2021-10-11 05:03:48.463942 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=_eCdnzjj0CiDN6VvK_c8JA
2021-10-11 05:03:48.463951 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/server.key
2021-10-11 05:03:48.463965 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3995
2021-10-11 05:03:48.463976 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381
2021-10-11 05:03:48.463986 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout
2021-10-11 05:03:48.463997 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap
2021-10-11 05:03:48.464009 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a
2021-10-11 05:03:48.464019 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/me.crt
2021-10-11 05:03:48.464027 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true
2021-10-11 05:03:48.464039 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/me.key
2021-10-11 05:03:48.464047 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/ca.crt
2021-10-11 05:03:48.464061 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/ca.crt
2021-10-11 05:03:48.464074 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=
{"level":"info","ts":"2021-10-11T05:03:48.464Z","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2381"]}
{"level":"info","ts":"2021-10-11T05:03:48.464Z","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/me.crt, key = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-10-11T05:03:48.464Z","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:3995"]}
{"level":"info","ts":"2021-10-11T05:03:48.464Z","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"ae9734ed2","go-version":"go1.12.17","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"etcd-events-a","data-dir":"/rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA","wal-dir":"","wal-dir-dedicated":"","member-dir":"/rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995"],"listen-client-urls":["https://0.0.0.0:3995"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"etcd-events-a=https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381","initial-cluster-state":"new","initial-cluster-token":"_eCdnzjj0CiDN6VvK_c8JA","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
{"level":"info","ts":"2021-10-11T05:03:48.468Z","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA/member/snap/db","took":"2.553303ms"}
{"level":"info","ts":"2021-10-11T05:03:48.468Z","caller":"netutil/netutil.go:112","msg":"resolved URL Host","url":"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381","host":"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381","resolved-addr":"172.20.34.237:2381"}
{"level":"info","ts":"2021-10-11T05:03:48.468Z","caller":"netutil/netutil.go:112","msg":"resolved URL Host","url":"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381","host":"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381","resolved-addr":"172.20.34.237:2381"}
{"level":"info","ts":"2021-10-11T05:03:48.473Z","caller":"etcdserver/raft.go:486","msg":"starting local member","local-member-id":"a4e36be56781bc4f","cluster-id":"17349b3bf894373c"}
{"level":"info","ts":"2021-10-11T05:03:48.473Z","caller":"raft/raft.go:1530","msg":"a4e36be56781bc4f switched to configuration voters=()"}
{"level":"info","ts":"2021-10-11T05:03:48.473Z","caller":"raft/raft.go:700","msg":"a4e36be56781bc4f became follower at term 0"}
{"level":"info","ts":"2021-10-11T05:03:48.473Z","caller":"raft/raft.go:383","msg":"newRaft a4e36be56781bc4f [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2021-10-11T05:03:48.473Z","caller":"raft/raft.go:700","msg":"a4e36be56781bc4f became follower at term 1"}
{"level":"info","ts":"2021-10-11T05:03:48.473Z","caller":"raft/raft.go:1530","msg":"a4e36be56781bc4f switched to configuration voters=(11881458874961738831)"}
{"level":"warn","ts":"2021-10-11T05:03:48.476Z","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-10-11T05:03:48.480Z","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-10-11T05:03:48.481Z","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"a4e36be56781bc4f","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-10-11T05:03:48.482Z","caller":"etcdserver/server.go:669","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"a4e36be56781bc4f","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-10-11T05:03:48.482Z","caller":"raft/raft.go:1530","msg":"a4e36be56781bc4f switched to configuration voters=(11881458874961738831)"}
{"level":"info","ts":"2021-10-11T05:03:48.482Z","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"17349b3bf894373c","local-member-id":"a4e36be56781bc4f","added-peer-id":"a4e36be56781bc4f","added-peer-peer-urls":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"]}
{"level":"info","ts":"2021-10-11T05:03:48.483Z","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/server.crt, key = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-10-11T05:03:48.483Z","caller":"embed/etcd.go:579","msg":"serving peer traffic","address":"[::]:2381"}
{"level":"info","ts":"2021-10-11T05:03:48.483Z","caller":"embed/etcd.go:244","msg":"now serving peer/client/metrics","local-member-id":"a4e36be56781bc4f","initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995"],"listen-client-urls":["https://0.0.0.0:3995"],"listen-metrics-urls":[]}
{"level":"info","ts":"2021-10-11T05:03:48.773Z","caller":"raft/raft.go:923","msg":"a4e36be56781bc4f is starting a new election at term 1"}
{"level":"info","ts":"2021-10-11T05:03:48.773Z","caller":"raft/raft.go:713","msg":"a4e36be56781bc4f became candidate at term 2"}
{"level":"info","ts":"2021-10-11T05:03:48.773Z","caller":"raft/raft.go:824","msg":"a4e36be56781bc4f received MsgVoteResp from a4e36be56781bc4f at term 2"}
{"level":"info","ts":"2021-10-11T05:03:48.773Z","caller":"raft/raft.go:765","msg":"a4e36be56781bc4f became leader at term 2"}
{"level":"info","ts":"2021-10-11T05:03:48.773Z","caller":"raft/node.go:325","msg":"raft.node: a4e36be56781bc4f elected leader a4e36be56781bc4f at term 2"}
{"level":"info","ts":"2021-10-11T05:03:48.774Z","caller":"etcdserver/server.go:2037","msg":"published local member to cluster through raft","local-member-id":"a4e36be56781bc4f","local-member-attributes":"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995]}","request-path":"/0/members/a4e36be56781bc4f/attributes","cluster-id":"17349b3bf894373c","publish-timeout":"7s"}
{"level":"info","ts":"2021-10-11T05:03:48.774Z","caller":"etcdserver/server.go:2528","msg":"setting up initial cluster version","cluster-version":"3.4"}
{"level":"info","ts":"2021-10-11T05:03:48.774Z","caller":"membership/cluster.go:558","msg":"set initial cluster version","cluster-id":"17349b3bf894373c","local-member-id":"a4e36be56781bc4f","cluster-version":"3.4"}
{"level":"info","ts":"2021-10-11T05:03:48.775Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":"2021-10-11T05:03:48.775Z","caller":"etcdserver/server.go:2560","msg":"cluster version is updated","cluster-version":"3.4"}
{"level":"info","ts":"2021-10-11T05:03:48.775Z","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"[::]:3995"}
I1011 05:03:48.961775   20215 s3fs.go:257] Writing file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:03:49.227040   20215 controller.go:187] starting controller iteration
I1011 05:03:49.227060   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:03:49.227363   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:03:49.227502   20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:03:49.228045   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995]
I1011 05:03:49.240432   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995"],"ID":"11881458874961738831"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" quarantined:true > }
I1011 05:03:49.240529   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995"],"ID":"11881458874961738831"}]
I1011 05:03:49.240545   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:03:49.240741   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:03:49.240757   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:03:49.240802   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:03:49.240865   20215 commands.go:38] not refreshing commands - TTL not hit
I1011 05:03:49.240879   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:03:49.483552   20215 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I1011 05:03:49.484741   20215 backup.go:128] performing snapshot save to /tmp/3274778227/snapshot.db.gz
{"level":"info","ts":"2021-10-11T05:03:49.491Z","logger":"etcd-client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2021-10-11T05:03:49.491Z","caller":"v3rpc/maintenance.go:139","msg":"sending database snapshot to client","total-bytes":20480,"size":"20 kB"}
{"level":"info","ts":"2021-10-11T05:03:49.492Z","caller":"v3rpc/maintenance.go:177","msg":"sending database sha256 checksum to client","total-bytes":20480,"checksum-size":32}
{"level":"info","ts":"2021-10-11T05:03:49.492Z","caller":"v3rpc/maintenance.go:191","msg":"successfully sent database snapshot to client","total-bytes":20480,"size":"20 kB","took":"now"}
{"level":"info","ts":"2021-10-11T05:03:49.493Z","logger":"etcd-client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
I1011 05:03:49.493492   20215 s3fs.go:257] Writing file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/2021-10-11T05:03:49Z-000001/etcd.backup.gz"
I1011 05:03:49.744390   20215 s3fs.go:257] Writing file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/2021-10-11T05:03:49Z-000001/_etcd_backup.meta"
I1011 05:03:50.000601   20215 backup.go:153] backup complete: name:"2021-10-11T05:03:49Z-000001" 
I1011 05:03:50.001050   20215 controller.go:935] backup response: name:"2021-10-11T05:03:49Z-000001" 
I1011 05:03:50.001063   20215 controller.go:574] took backup: name:"2021-10-11T05:03:49Z-000001" 
I1011 05:03:50.251007   20215 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events: [2021-10-11T05:03:49Z-000001]
I1011 05:03:50.251030   20215 cleanup.go:166] retaining backup "2021-10-11T05:03:49Z-000001"
I1011 05:03:50.251055   20215 restore.go:98] Setting quarantined state to false
I1011 05:03:50.251319   20215 etcdserver.go:393] Reconfigure request: header:<leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" cluster_name:"etcd-events" > 
I1011 05:03:50.251371   20215 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" cluster_name:"etcd-events" > 
I1011 05:03:50.251382   20215 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA
I1011 05:03:50.251421   20215 etcdprocess.go:131] Waiting for etcd to exit
I1011 05:03:50.351690   20215 etcdprocess.go:131] Waiting for etcd to exit
I1011 05:03:50.351710   20215 etcdprocess.go:136] Exited etcd: signal: killed
I1011 05:03:50.351777   20215 etcdserver.go:443] updated cluster state: cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" 
I1011 05:03:50.351930   20215 etcdserver.go:448] Starting etcd version "3.4.13"
I1011 05:03:50.351944   20215 etcdserver.go:556] starting etcd with state cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" 
I1011 05:03:50.351976   20215 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA
I1011 05:03:50.352142   20215 pki.go:58] adding peerClientIPs [172.20.34.237]
I1011 05:03:50.352165   20215 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io] IPs:[172.20.34.237 127.0.0.1]} Usages:[2 1]}
I1011 05:03:50.352345   20215 certs.go:151] existing certificate not valid after 2023-10-11T05:03:48Z; will regenerate
I1011 05:03:50.352355   20215 certs.go:211] generating certificate for "etcd-events-a"
I1011 05:03:50.354556   20215 pki.go:108] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}
I1011 05:03:50.354723   20215 certs.go:151] existing certificate not valid after 2023-10-11T05:03:48Z; will regenerate
I1011 05:03:50.354733   20215 certs.go:211] generating certificate for "etcd-events-a"
I1011 05:03:50.461146   20215 certs.go:211] generating certificate for "etcd-events-a"
I1011 05:03:50.462963   20215 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]
I1011 05:03:50.464427   20215 restore.go:116] ReconfigureResponse: 
I1011 05:03:50.465590   20215 controller.go:187] starting controller iteration
I1011 05:03:50.465608   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:03:50.465857   
20215 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:03:50.465972   20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:03:50.466331   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\n2021-10-11 05:03:50.469520 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\n2021-10-11 05:03:50.469550 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/server.crt\n2021-10-11 05:03:50.469561 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-10-11 05:03:50.469573 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA\n2021-10-11 05:03:50.469584 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-10-11 05:03:50.469607 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\n2021-10-11 05:03:50.469613 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\n2021-10-11 05:03:50.469621 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing\n2021-10-11 05:03:50.469634 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=_eCdnzjj0CiDN6VvK_c8JA\n2021-10-11 05:03:50.469642 I | pkg/flags: recognized and used environment variable 
ETCD_KEY_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/server.key\n2021-10-11 05:03:50.469651 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4002\n2021-10-11 05:03:50.469675 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381\n2021-10-11 05:03:50.469682 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-10-11 05:03:50.469690 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-10-11 05:03:50.469703 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a\n2021-10-11 05:03:50.469711 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/me.crt\n2021-10-11 05:03:50.469721 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-10-11 05:03:50.469727 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/me.key\n2021-10-11 05:03:50.469732 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/ca.crt\n2021-10-11 05:03:50.469747 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/ca.crt\n2021-10-11 05:03:50.469761 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.469Z\",\"caller\":\"etcdmain/etcd.go:134\",\"msg\":\"server has been already 
initialized\",\"data-dir\":\"/rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.469Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.469Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/me.crt, key = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.470Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4002\"]}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.470Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd 
server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-events-a\",\"data-dir\":\"/rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"listen-client-urls\":[\"https://0.0.0.0:4002\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.470Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-0916f2a071e4c09ad/data/_eCdnzjj0CiDN6VvK_c8JA/member/snap/db\",\"took\":\"110.559µs\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.471Z\",\"caller\":\"etcdserver/raft.go:536\",\"msg\":\"restarting local 
member\",\"cluster-id\":\"17349b3bf894373c\",\"local-member-id\":\"a4e36be56781bc4f\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.471Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"a4e36be56781bc4f switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.471Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"a4e36be56781bc4f became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.471Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft a4e36be56781bc4f [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]\"}\n{\"level\":\"warn\",\"ts\":\"2021-10-11T05:03:50.472Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.474Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.475Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"a4e36be56781bc4f\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.475Z\",\"caller\":\"etcdserver/server.go:691\",\"msg\":\"starting initial election tick advance\",\"election-ticks\":10}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.476Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"a4e36be56781bc4f switched to configuration voters=(11881458874961738831)\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.476Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added 
member\",\"cluster-id\":\"17349b3bf894373c\",\"local-member-id\":\"a4e36be56781bc4f\",\"added-peer-id\":\"a4e36be56781bc4f\",\"added-peer-peer-urls\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.476Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"17349b3bf894373c\",\"local-member-id\":\"a4e36be56781bc4f\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.476Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.478Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/server.crt, key = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0916f2a071e4c09ad/pki/_eCdnzjj0CiDN6VvK_c8JA/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.478Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"a4e36be56781bc4f\",\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"listen-client-urls\":[\"https://0.0.0.0:4002\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:50.478Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:51.772Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"a4e36be56781bc4f is starting a new election at term 
2\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:51.772Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"a4e36be56781bc4f became candidate at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:51.772Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"a4e36be56781bc4f received MsgVoteResp from a4e36be56781bc4f at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:51.772Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"a4e36be56781bc4f became leader at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:51.772Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: a4e36be56781bc4f elected leader a4e36be56781bc4f at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:51.772Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"a4e36be56781bc4f\",\"local-member-attributes\":\"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]}\",\"request-path\":\"/0/members/a4e36be56781bc4f/attributes\",\"cluster-id\":\"17349b3bf894373c\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-10-11T05:03:51.773Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:4002\"}\nI1011 05:03:51.789035   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:03:51.789118   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:03:51.789135   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:03:51.789317   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:03:51.789335   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:03:51.789377   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:03:51.789443   20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:03:51.789457   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:03:52.029695   20215 
controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:03:52.029754   20215 controller.go:555] controller loop complete\nI1011 05:04:02.034873   20215 controller.go:187] starting controller iteration\nI1011 05:04:02.034913   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:04:02.035191   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:04:02.035353   20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:04:02.036031   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\nI1011 05:04:02.062687   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:04:02.062790   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:04:02.062806   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:04:02.063156   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:04:02.063173   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:04:02.063235   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:04:02.063315   20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:04:02.063330   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:04:02.993342   20215 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:04:02.993411   20215 controller.go:555] controller loop complete\nI1011 05:04:12.994770   20215 controller.go:187] starting controller iteration\nI1011 05:04:12.994795   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:04:12.995067   20215 leadership.go:37] 
Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:04:12.995190   20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:04:12.995560   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\nI1011 05:04:13.006965   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:04:13.007046   20215 controller.go:301] etcd cluster members: 
map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:04:13.007061   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:04:13.007256   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:04:13.007270   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:04:13.007322   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:04:13.007395   20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:04:13.007406   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:04:13.957882   20215 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:04:13.957962   20215 controller.go:555] controller loop complete\nI1011 05:04:23.959055   20215 controller.go:187] starting controller iteration\nI1011 05:04:23.959086   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:04:23.959357   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:04:23.959499  
 20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:04:23.959900   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\nI1011 05:04:23.972227   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:04:23.972312   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:04:23.972330   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" 
dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:04:23.972649   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:04:23.972689   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:04:23.972741   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:04:23.972862   20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:04:23.972881   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:04:24.905549   20215 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:04:24.905619   20215 controller.go:555] controller loop complete\nI1011 05:04:33.370810   20215 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI1011 05:04:33.425345   20215 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI1011 05:04:33.465558   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:04:33.465637   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:04:34.907142   20215 controller.go:187] starting controller iteration\nI1011 05:04:34.907164   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:04:34.907493   20215 leadership.go:37] Got 
LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:04:34.907641   20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:04:34.908406   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\nI1011 05:04:34.920316   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:04:34.920381   20215 controller.go:301] etcd cluster members: 
map[11881458874961738831:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}]
I1011 05:04:34.920395   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:04:34.920566   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:04:34.920580   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:04:34.920627   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:04:34.920718   20215 commands.go:38] not refreshing commands - TTL not hit
I1011 05:04:34.920734   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:04:35.860812   20215 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I1011 05:04:35.860886   20215 controller.go:555] controller loop complete
I1011 05:04:45.862209   20215 controller.go:187] starting controller iteration
I1011 05:04:45.862235   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:04:45.862502   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:04:45.862635   20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:04:45.863274   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]
I1011 05:04:45.877336   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I1011 05:04:45.877397   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}]
I1011 05:04:45.877411   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:04:45.877579   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:04:45.877594   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:04:45.877637   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:04:45.877736   20215 commands.go:38] not refreshing commands - TTL not hit
I1011 05:04:45.877749   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:04:46.836011   20215 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I1011 05:04:46.836085   20215 controller.go:555] controller loop complete
... skipping repeated controller iterations (output identical except timestamps, 05:04:56 through 05:05:30) ...
I1011 05:05:33.466532   20215 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I1011 05:05:33.586184   20215 volumes.go:86] AWS API Request: ec2/DescribeInstances
I1011 05:05:33.620508   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:05:33.620582   20215 hosts.go:181] skipping update of unchanged /etc/hosts
... skipping repeated controller iterations and a second identical volume-discovery pass (05:05:40 through 05:06:33) ...
I1011 05:06:35.490765   20215 controller.go:187] starting controller iteration
I1011 05:06:35.490790   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:06:35.491109   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:06:35.491257   20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:06:35.491875   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]
I1011 05:06:35.506947   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:06:35.507022   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:06:35.507036   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:06:35.507239   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:06:35.507254   20215 hosts.go:84] hosts update: 
primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:06:35.507306   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:06:35.507380   20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:06:35.507392   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:06:36.451643   20215 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:06:36.451729   20215 controller.go:555] controller loop complete\nI1011 05:06:46.453579   20215 controller.go:187] starting controller iteration\nI1011 05:06:46.453603   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:06:46.453886   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:06:46.454030   20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:06:46.454334   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\nI1011 05:06:46.465632   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, 
info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:06:46.465724   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:06:46.465739   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:06:46.465939   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:06:46.465955   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:06:46.466004   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:06:46.466086   
20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:06:46.466098   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:06:47.395812   20215 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:06:47.395878   20215 controller.go:555] controller loop complete\nI1011 05:06:57.397773   20215 controller.go:187] starting controller iteration\nI1011 05:06:57.397798   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:06:57.398084   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:06:57.398222   20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:06:57.398817   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\nI1011 05:06:57.411020   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > 
etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:06:57.411089   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:06:57.411106   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:06:57.411294   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:06:57.411317   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:06:57.411373   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:06:57.412042   20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:06:57.412062   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:06:58.358298   20215 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:06:58.358370   20215 controller.go:555] 
controller loop complete\nI1011 05:07:08.359825   20215 controller.go:187] starting controller iteration\nI1011 05:07:08.359853   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:07:08.360097   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:07:08.360236   20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:07:08.360595   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\nI1011 05:07:08.372193   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > 
etcd_version:\"3.4.13\" > }\nI1011 05:07:08.372284   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:07:08.372301   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:07:08.372471   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:07:08.372485   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:07:08.372529   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:07:08.372583   20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:07:08.372592   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:07:09.321740   20215 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:07:09.321814   20215 controller.go:555] controller loop complete\nI1011 05:07:19.323366   20215 controller.go:187] starting controller iteration\nI1011 05:07:19.323391   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:07:19.323725   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > 
leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:07:19.323866   20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:07:19.324412   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\nI1011 05:07:19.338731   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:07:19.338798   20215 controller.go:301] etcd cluster members: 
map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:07:19.338813   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:07:19.339013   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:07:19.339028   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:07:19.339078   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:07:19.339161   20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:07:19.339184   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:07:20.288363   20215 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:07:20.288434   20215 controller.go:555] controller loop complete\nI1011 05:07:30.290008   20215 controller.go:187] starting controller iteration\nI1011 05:07:30.290032   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:07:30.290283   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:07:30.290426  
 20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:07:30.290753   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\nI1011 05:07:30.304879   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:07:30.304951   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:07:30.304969   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" 
dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:07:30.305165   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:07:30.305179   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:07:30.305231   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:07:30.305307   20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:07:30.305320   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:07:31.257933   20215 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:07:31.258008   20215 controller.go:555] controller loop complete\nI1011 05:07:33.779959   20215 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI1011 05:07:33.836831   20215 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI1011 05:07:33.888821   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:07:33.888887   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:07:41.259181   20215 controller.go:187] starting controller iteration\nI1011 05:07:41.259205   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:07:41.259510   20215 leadership.go:37] Got 
LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:07:41.259638   20215 controller.go:293] I am leader with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:07:41.260160   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]\nI1011 05:07:41.274423   20215 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"_eCdnzjj0CiDN6VvK_c8JA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI1011 05:07:41.274489   20215 controller.go:301] etcd cluster members: 
map[11881458874961738831:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002\"],\"ID\":\"11881458874961738831\"}]\nI1011 05:07:41.274504   20215 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io\" addresses:\"172.20.34.237\" > \nI1011 05:07:41.274705   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:07:41.274721   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]\nI1011 05:07:41.274769   20215 hosts.go:181] skipping update of unchanged /etc/hosts\nI1011 05:07:41.274848   20215 commands.go:38] not refreshing commands - TTL not hit\nI1011 05:07:41.274863   20215 s3fs.go:327] Reading file \"s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI1011 05:07:42.529128   20215 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI1011 05:07:42.529199   20215 controller.go:555] controller loop complete\nI1011 05:07:52.530765   20215 controller.go:187] starting controller iteration\nI1011 05:07:52.530791   20215 controller.go:264] Broadcasting leadership assertion with token \"9Ik3Fpz9rRdqcpAWu4W_Aw\"\nI1011 05:07:52.531036   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > leadership_token:\"9Ik3Fpz9rRdqcpAWu4W_Aw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.34.237:3997\" > > \nI1011 05:07:52.531241  
 20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:07:52.531678   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]
I1011 05:07:52.543471   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I1011 05:07:52.543545   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}]
I1011 05:07:52.543561   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:07:52.543749   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:07:52.543763   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:07:52.543811   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:07:52.543869   20215 commands.go:38] not refreshing commands - TTL not hit
I1011 05:07:52.543884   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:07:53.499145   20215 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I1011 05:07:53.499224   20215 controller.go:555] controller loop complete
I1011 05:08:03.500683   20215 controller.go:187] starting controller iteration
I1011 05:08:03.500709   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:03.501016   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:08:03.501155   20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:03.501645   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]
I1011 05:08:03.513420   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I1011 05:08:03.513501   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}]
I1011 05:08:03.513526   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:08:03.513693   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:03.513709   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:03.513775   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:08:03.513956   20215 commands.go:38] not refreshing commands - TTL not hit
I1011 05:08:03.513973   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:08:04.456955   20215 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I1011 05:08:04.457026   20215 controller.go:555] controller loop complete
I1011 05:08:14.459710   20215 controller.go:187] starting controller iteration
I1011 05:08:14.459738   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:14.459976   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:08:14.460090   20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:14.460971   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]
I1011 05:08:14.485789   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I1011 05:08:14.485868   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}]
I1011 05:08:14.485883   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:08:14.486087   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:14.486104   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:14.486159   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:08:14.486245   20215 commands.go:38] not refreshing commands - TTL not hit
I1011 05:08:14.486261   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:08:15.428895   20215 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I1011 05:08:15.428969   20215 controller.go:555] controller loop complete
I1011 05:08:25.431356   20215 controller.go:187] starting controller iteration
I1011 05:08:25.431382   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:25.431646   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:08:25.431790   20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:25.432193   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]
I1011 05:08:25.443388   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I1011 05:08:25.443452   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}]
I1011 05:08:25.443468   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:08:25.443685   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:25.443702   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:25.443751   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:08:25.443830   20215 commands.go:38] not refreshing commands - TTL not hit
I1011 05:08:25.443843   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:08:26.397359   20215 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I1011 05:08:26.397435   20215 controller.go:555] controller loop complete
I1011 05:08:33.889902   20215 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I1011 05:08:33.945397   20215 volumes.go:86] AWS API Request: ec2/DescribeInstances
I1011 05:08:33.979529   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:33.979613   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:08:36.399580   20215 controller.go:187] starting controller iteration
I1011 05:08:36.399606   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:36.399868   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:08:36.400071   20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:36.400989   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]
I1011 05:08:36.414707   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I1011 05:08:36.414782   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}]
I1011 05:08:36.414797   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:08:36.414997   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:36.415013   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:36.415063   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:08:36.415158   20215 commands.go:38] not refreshing commands - TTL not hit
I1011 05:08:36.415172   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:08:37.344874   20215 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I1011 05:08:37.344946   20215 controller.go:555] controller loop complete
I1011 05:08:47.346918   20215 controller.go:187] starting controller iteration
I1011 05:08:47.346944   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:47.347240   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:08:47.347449   20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:47.347846   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]
I1011 05:08:47.359320   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I1011 05:08:47.359397   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}]
I1011 05:08:47.359414   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:08:47.359629   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:47.359645   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:47.359720   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I1011 05:08:47.359803   20215 commands.go:38] not refreshing commands - TTL not hit
I1011 05:08:47.359818   20215 s3fs.go:327] Reading file "s3://k8s-kops-prow/e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I1011 05:08:48.302427   20215 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I1011 05:08:48.302504   20215 controller.go:555] controller loop complete
I1011 05:08:58.304775   20215 controller.go:187] starting controller iteration
I1011 05:08:58.304803   20215 controller.go:264] Broadcasting leadership assertion with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:58.305079   20215 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > leadership_token:"9Ik3Fpz9rRdqcpAWu4W_Aw" healthy:<id:"etcd-events-a" endpoints:"172.20.34.237:3997" > > 
I1011 05:08:58.305219   20215 controller.go:293] I am leader with token "9Ik3Fpz9rRdqcpAWu4W_Aw"
I1011 05:08:58.306004   20215 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002]
I1011 05:08:58.319525   20215 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.34.237:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"_eCdnzjj0CiDN6VvK_c8JA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I1011 05:08:58.319609   20215 controller.go:301] etcd cluster members: map[11881458874961738831:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:4002"],"ID":"11881458874961738831"}]
I1011 05:08:58.319626   20215 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io" addresses:"172.20.34.237" > 
I1011 05:08:58.319825   20215 etcdserver.go:248] updating hosts: map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:58.319841   20215 hosts.go:84] hosts update: primary=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io:[172.20.34.237 172.20.34.237]], final=map[172.20.34.237:[etcd-events-a.internal.e2e-cf879872f7-ac31a.test-cncf-aws.k8s.io]]
I1011 05:08:58.319878   20215 hosts.go:181] skipping update of unchanged /etc/hosts
I101