Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-22 08:51
Elapsed: 38m42s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 132 lines ...
I0922 08:52:06.460950    4648 http.go:37] curl https://storage.googleapis.com/kops-ci/markers/release-1.22/latest-ci-updown-green.txt
I0922 08:52:06.462831    4648 http.go:37] curl https://storage.googleapis.com/k8s-staging-kops/kops/releases/1.22.0-beta.2+v1.22.0-beta.1-99-g92400cd674/linux/amd64/kops
I0922 08:52:07.284154    4648 up.go:43] Cleaning up any leaked resources from previous cluster
I0922 08:52:07.284202    4648 dumplogs.go:40] /logs/artifacts/28eb9d98-1b82-11ec-9c88-aaad778dd704/kops toolbox dump --name e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user
I0922 08:52:07.302691    4668 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0922 08:52:07.303556    4668 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io" not found
W0922 08:52:07.803135    4648 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0922 08:52:07.803191    4648 down.go:48] /logs/artifacts/28eb9d98-1b82-11ec-9c88-aaad778dd704/kops delete cluster --name e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --yes
I0922 08:52:07.820679    4678 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0922 08:52:07.821325    4678 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io" not found
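The two "not found" errors above are expected: the harness starts by best-effort cleanup of any cluster leaked by a previous run, and here there is nothing to dump or delete. A rough manual equivalent of that cleanup, assuming the same cluster name and a KOPS_STATE_STORE already exported as in this job, might look like:

  # Best-effort cleanup of a possibly non-existent cluster (sketch only;
  # the name and artifact directory are taken from this job and would differ elsewhere).
  CLUSTER_NAME=e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io
  kops toolbox dump --name "${CLUSTER_NAME}" --dir ./artifacts || true   # ignore "not found"
  kops delete cluster --name "${CLUSTER_NAME}" --yes || true             # ignore "not found"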
I0922 08:52:08.322584    4648 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/09/22 08:52:08 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0922 08:52:08.328943    4648 http.go:37] curl https://ip.jsb.workers.dev
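The 404 above is expected when the job is not running on a GCE VM with an external interface; the harness falls back to an external IP-echo service to learn its public address, which then feeds the --admin-access CIDR on the create-cluster command below. A minimal sketch of the same two-step lookup (not the harness's actual code):

  # Try the GCE metadata server first, then fall back to an external IP-echo service.
  EXTERNAL_IP=$(curl -sf -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" \
    || curl -sf https://ip.jsb.workers.dev)
  echo "external ip: ${EXTERNAL_IP}"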
I0922 08:52:08.476542    4648 up.go:144] /logs/artifacts/28eb9d98-1b82-11ec-9c88-aaad778dd704/kops create cluster --name e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.5 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=amazon/amzn2-ami-hvm-2.0.20210813.1-x86_64-gp2 --channel=alpha --networking=flannel --container-runtime=containerd --admin-access 35.225.117.235/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones sa-east-1a --master-size c5.large
I0922 08:52:08.496104    4688 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0922 08:52:08.496180    4688 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I0922 08:52:08.518579    4688 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0922 08:52:09.017868    4688 new_cluster.go:1055]  Cloud Provider ID = aws
... skipping 41 lines ...

I0922 08:52:36.939585    4648 up.go:181] /logs/artifacts/28eb9d98-1b82-11ec-9c88-aaad778dd704/kops validate cluster --name e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0922 08:52:36.957556    4708 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0922 08:52:36.957661    4708 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io

W0922 08:52:38.435574    4708 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
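Until dns-controller replaces that placeholder, the API hostname either fails to resolve (as in the lookup error above) or resolves to 203.0.113.123, so the cluster cannot yet be reached by name. A diagnostic sketch for confirming which state the record is in, assuming KOPS_STATE_STORE is set as in this job (these commands are not part of the run above):

  # Still NXDOMAIN, or already pointing at the kops placeholder 203.0.113.123?
  dig +short api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io
  # While API DNS is unusable, the dns-controller/protokube logs live on the master;
  # kops can pull them over SSH without a working API DNS entry:
  kops toolbox dump --name e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --dir ./artifacts \
    --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user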
W0922 08:52:48.472112    4708 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 335 lines (the same dns/apiserver validation failure, repeated on each ~10s retry through 08:56:09) ...
W0922 08:56:19.210161    4708 validate_cluster.go:232] (will retry): cluster not yet healthy
W0922 08:56:59.246968    4708 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 203.0.113.123:443: i/o timeout
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
... skipping 6 lines ...
VALIDATION ERRORS
KIND	NAME						MESSAGE
Machine	i-021f849308fc74b8f				machine "i-021f849308fc74b8f" has not yet joined cluster
Node	ip-172-20-50-246.sa-east-1.compute.internal	node "ip-172-20-50-246.sa-east-1.compute.internal" of role "node" is not ready
Pod	kube-system/kube-flannel-ds-zk5bm		system-node-critical pod "kube-flannel-ds-zk5bm" is pending

Validation Failed
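At this point the control plane is answering, so the remaining failures can be inspected directly: one instance has not registered as a node, one registered node is NotReady, and its kube-flannel pod is still Pending. A quick triage sketch, assuming kubectl is pointed at this cluster:

  # Which nodes have registered, and which are NotReady?
  kubectl get nodes -o wide
  # Why is the flannel pod for that node still Pending / not ready?
  kubectl -n kube-system get pods -o wide | grep flannel
  kubectl -n kube-system describe pod kube-flannel-ds-zk5bm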
W0922 08:57:12.879592    4708 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 36 lines ...
ip-172-20-52-88.sa-east-1.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/kube-flannel-ds-zk5bm	system-node-critical pod "kube-flannel-ds-zk5bm" is not ready (kube-flannel)

Validation Failed
W0922 08:57:50.194233    4708 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 719 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 565 lines ...
STEP: Destroying namespace "apply-66" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 2 lines ...
Sep 22 09:00:23.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
W0922 09:00:25.807758    5316 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 22 09:00:25.807: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Sep 22 09:00:25.954: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
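The framework decides whether PodSecurityPolicy is enforced by server-side dry-running a pod and checking for the kubernetes.io/psp annotation that the PSP admission plugin adds. Roughly the same check from the command line (a sketch; pod.yaml stands in for any minimal pod manifest):

  # If PSP admission is active, the dry-run pod comes back annotated with the
  # policy that admitted it; no annotation means PSP is effectively disabled.
  kubectl create -f pod.yaml --dry-run=server -o yaml | grep "kubernetes.io/psp" \
    || echo "PodSecurityPolicy appears to be disabled"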
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Sep 22 09:00:26.528: INFO: found topology map[topology.kubernetes.io/zone:sa-east-1a]
Sep 22 09:00:26.529: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 22 09:00:26.529: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 93 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:00:31.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9443" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:31.573: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 94 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":1,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:36.341: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
Sep 22 09:00:24.520: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-9751e8fe-398d-474e-b55d-aefd7c2b8cc3
STEP: Creating a pod to test consume secrets
Sep 22 09:00:25.100: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74" in namespace "projected-6586" to be "Succeeded or Failed"
Sep 22 09:00:25.243: INFO: Pod "pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74": Phase="Pending", Reason="", readiness=false. Elapsed: 143.576197ms
Sep 22 09:00:27.405: INFO: Pod "pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305276443s
Sep 22 09:00:29.549: INFO: Pod "pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449572322s
Sep 22 09:00:31.708: INFO: Pod "pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.608169373s
Sep 22 09:00:33.853: INFO: Pod "pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74": Phase="Pending", Reason="", readiness=false. Elapsed: 8.753184009s
Sep 22 09:00:35.997: INFO: Pod "pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.897915518s
STEP: Saw pod success
Sep 22 09:00:35.998: INFO: Pod "pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74" satisfied condition "Succeeded or Failed"
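The "Succeeded or Failed" wait above is a simple poll of the pod's status.phase with a timeout. The same idea can be approximated outside the framework, e.g. (a sketch, not the framework's actual code; pod and namespace names are taken from this run):

  # Poll a pod's phase until it reaches a terminal state or the attempts run out
  # (rough shell equivalent of the 5m wait above).
  for i in $(seq 1 100); do
    phase=$(kubectl -n projected-6586 get pod pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74 \
      -o jsonpath='{.status.phase}')
    if [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ]; then break; fi
    sleep 3
  done
  echo "final phase: $phase"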
Sep 22 09:00:36.141: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 22 09:00:36.448: INFO: Waiting for pod pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74 to disappear
Sep 22 09:00:36.593: INFO: Pod pod-projected-secrets-16eed13f-65f6-4632-9640-21ac47a19d74 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:13.226 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:37.039: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 129 lines ...
Sep 22 09:00:24.584: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 22 09:00:25.015: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec" in namespace "security-context-test-9265" to be "Succeeded or Failed"
Sep 22 09:00:25.159: INFO: Pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec": Phase="Pending", Reason="", readiness=false. Elapsed: 143.496259ms
Sep 22 09:00:27.317: INFO: Pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301268471s
Sep 22 09:00:29.469: INFO: Pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453445979s
Sep 22 09:00:31.617: INFO: Pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601946669s
Sep 22 09:00:33.795: INFO: Pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.779782331s
Sep 22 09:00:35.940: INFO: Pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec": Phase="Pending", Reason="", readiness=false. Elapsed: 10.924495711s
Sep 22 09:00:38.084: INFO: Pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec": Phase="Pending", Reason="", readiness=false. Elapsed: 13.068558798s
Sep 22 09:00:40.229: INFO: Pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.213466483s
Sep 22 09:00:40.229: INFO: Pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec" satisfied condition "Succeeded or Failed"
Sep 22 09:00:40.386: INFO: Got logs for pod "busybox-privileged-false-57c525ac-0e22-49c5-93ff-a2f8350149ec": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:00:40.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9265" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:40.828: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 94 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 10 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:00:25.056: INFO: Waiting up to 5m0s for pod "downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055" in namespace "projected-8553" to be "Succeeded or Failed"
Sep 22 09:00:25.202: INFO: Pod "downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055": Phase="Pending", Reason="", readiness=false. Elapsed: 146.154439ms
Sep 22 09:00:27.358: INFO: Pod "downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302496086s
Sep 22 09:00:29.504: INFO: Pod "downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448005308s
Sep 22 09:00:31.650: INFO: Pod "downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055": Phase="Pending", Reason="", readiness=false. Elapsed: 6.594114181s
Sep 22 09:00:33.799: INFO: Pod "downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743051379s
Sep 22 09:00:35.943: INFO: Pod "downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055": Phase="Pending", Reason="", readiness=false. Elapsed: 10.887043173s
Sep 22 09:00:38.088: INFO: Pod "downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055": Phase="Pending", Reason="", readiness=false. Elapsed: 13.032005349s
Sep 22 09:00:40.234: INFO: Pod "downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.178451085s
STEP: Saw pod success
Sep 22 09:00:40.234: INFO: Pod "downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055" satisfied condition "Succeeded or Failed"
Sep 22 09:00:40.380: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055 container client-container: <nil>
STEP: delete the pod
Sep 22 09:00:40.688: INFO: Waiting for pod downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055 to disappear
Sep 22 09:00:40.831: INFO: Pod downwardapi-volume-634c3628-f4a5-400b-b77e-ea493a15f055 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:17.384 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:41.281: INFO: Only supported for providers [openstack] (not aws)
... skipping 131 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:00:37.917: INFO: Waiting up to 5m0s for pod "downwardapi-volume-815e1e0b-cbd5-42fa-94e9-6963b611dd6d" in namespace "projected-6928" to be "Succeeded or Failed"
Sep 22 09:00:38.061: INFO: Pod "downwardapi-volume-815e1e0b-cbd5-42fa-94e9-6963b611dd6d": Phase="Pending", Reason="", readiness=false. Elapsed: 144.01242ms
Sep 22 09:00:40.206: INFO: Pod "downwardapi-volume-815e1e0b-cbd5-42fa-94e9-6963b611dd6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288951008s
Sep 22 09:00:42.354: INFO: Pod "downwardapi-volume-815e1e0b-cbd5-42fa-94e9-6963b611dd6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.43631432s
STEP: Saw pod success
Sep 22 09:00:42.354: INFO: Pod "downwardapi-volume-815e1e0b-cbd5-42fa-94e9-6963b611dd6d" satisfied condition "Succeeded or Failed"
Sep 22 09:00:42.500: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod downwardapi-volume-815e1e0b-cbd5-42fa-94e9-6963b611dd6d container client-container: <nil>
STEP: delete the pod
Sep 22 09:00:42.794: INFO: Waiting for pod downwardapi-volume-815e1e0b-cbd5-42fa-94e9-6963b611dd6d to disappear
Sep 22 09:00:42.941: INFO: Pod downwardapi-volume-815e1e0b-cbd5-42fa-94e9-6963b611dd6d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.184 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:43.239: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 47 lines ...
Sep 22 09:00:25.538: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-ec321a03-897e-4b0c-80f2-515f22cd4352
STEP: Creating a pod to test consume configMaps
Sep 22 09:00:26.116: INFO: Waiting up to 5m0s for pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b" in namespace "configmap-2866" to be "Succeeded or Failed"
Sep 22 09:00:26.260: INFO: Pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b": Phase="Pending", Reason="", readiness=false. Elapsed: 143.864467ms
Sep 22 09:00:28.406: INFO: Pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289881751s
Sep 22 09:00:30.552: INFO: Pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435410648s
Sep 22 09:00:32.696: INFO: Pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579743585s
Sep 22 09:00:34.842: INFO: Pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.725258973s
Sep 22 09:00:36.986: INFO: Pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.870085055s
Sep 22 09:00:39.132: INFO: Pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.015423905s
Sep 22 09:00:41.277: INFO: Pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.16057717s
Sep 22 09:00:43.421: INFO: Pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.305007378s
STEP: Saw pod success
Sep 22 09:00:43.421: INFO: Pod "pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b" satisfied condition "Succeeded or Failed"
Sep 22 09:00:43.566: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b container agnhost-container: <nil>
STEP: delete the pod
Sep 22 09:00:43.876: INFO: Waiting for pod pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b to disappear
Sep 22 09:00:44.020: INFO: Pod pod-configmaps-39de0f0d-7093-4e53-ae27-3a11e0fc464b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 96 lines ...
• [SLOW TEST:21.548 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
Sep 22 09:00:25.767: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Sep 22 09:00:26.201: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-3004" to be "Succeeded or Failed"
Sep 22 09:00:26.345: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 143.686707ms
Sep 22 09:00:28.490: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28853959s
Sep 22 09:00:30.634: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43247691s
Sep 22 09:00:32.779: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578142654s
Sep 22 09:00:34.923: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721963013s
Sep 22 09:00:37.067: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.865771838s
Sep 22 09:00:39.211: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 13.010028406s
Sep 22 09:00:41.358: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 15.157114754s
Sep 22 09:00:43.504: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 17.303070586s
Sep 22 09:00:45.649: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.447451974s
Sep 22 09:00:45.649: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:00:45.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3004" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":1,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:46.254: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:00:46.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4828" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:47.269: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:00:44.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Sep 22 09:00:45.331: INFO: Waiting up to 5m0s for pod "var-expansion-98abca6e-8774-4766-be1b-ec217bdc75a1" in namespace "var-expansion-3173" to be "Succeeded or Failed"
Sep 22 09:00:45.475: INFO: Pod "var-expansion-98abca6e-8774-4766-be1b-ec217bdc75a1": Phase="Pending", Reason="", readiness=false. Elapsed: 144.194807ms
Sep 22 09:00:47.620: INFO: Pod "var-expansion-98abca6e-8774-4766-be1b-ec217bdc75a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.28903959s
STEP: Saw pod success
Sep 22 09:00:47.620: INFO: Pod "var-expansion-98abca6e-8774-4766-be1b-ec217bdc75a1" satisfied condition "Succeeded or Failed"
Sep 22 09:00:47.764: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod var-expansion-98abca6e-8774-4766-be1b-ec217bdc75a1 container dapi-container: <nil>
STEP: delete the pod
Sep 22 09:00:48.066: INFO: Waiting for pod var-expansion-98abca6e-8774-4766-be1b-ec217bdc75a1 to disappear
Sep 22 09:00:48.209: INFO: Pod var-expansion-98abca6e-8774-4766-be1b-ec217bdc75a1 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:00:48.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3173" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:48.518: INFO: Only supported for providers [openstack] (not aws)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:00:45.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:00:48.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-2139" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:49.009: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 18 lines ...
Sep 22 09:00:36.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Sep 22 09:00:37.258: INFO: Waiting up to 5m0s for pod "var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad" in namespace "var-expansion-6484" to be "Succeeded or Failed"
Sep 22 09:00:37.402: INFO: Pod "var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad": Phase="Pending", Reason="", readiness=false. Elapsed: 144.104146ms
Sep 22 09:00:39.547: INFO: Pod "var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289304745s
Sep 22 09:00:41.692: INFO: Pod "var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43381884s
Sep 22 09:00:43.868: INFO: Pod "var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.609823974s
Sep 22 09:00:46.014: INFO: Pod "var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.756081168s
Sep 22 09:00:48.159: INFO: Pod "var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.901346408s
STEP: Saw pod success
Sep 22 09:00:48.159: INFO: Pod "var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad" satisfied condition "Succeeded or Failed"
Sep 22 09:00:48.306: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad container dapi-container: <nil>
STEP: delete the pod
Sep 22 09:00:48.598: INFO: Waiting for pod var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad to disappear
Sep 22 09:00:48.742: INFO: Pod var-expansion-421fc071-b0e3-48ee-9542-87a61af55fad no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 15 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:00:44.149: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1ff19fc-1e60-4038-aec7-bc5e589b81b0" in namespace "projected-5703" to be "Succeeded or Failed"
Sep 22 09:00:44.293: INFO: Pod "downwardapi-volume-d1ff19fc-1e60-4038-aec7-bc5e589b81b0": Phase="Pending", Reason="", readiness=false. Elapsed: 143.658736ms
Sep 22 09:00:46.446: INFO: Pod "downwardapi-volume-d1ff19fc-1e60-4038-aec7-bc5e589b81b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297320449s
Sep 22 09:00:48.590: INFO: Pod "downwardapi-volume-d1ff19fc-1e60-4038-aec7-bc5e589b81b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.441413097s
STEP: Saw pod success
Sep 22 09:00:48.590: INFO: Pod "downwardapi-volume-d1ff19fc-1e60-4038-aec7-bc5e589b81b0" satisfied condition "Succeeded or Failed"
Sep 22 09:00:48.734: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod downwardapi-volume-d1ff19fc-1e60-4038-aec7-bc5e589b81b0 container client-container: <nil>
STEP: delete the pod
Sep 22 09:00:49.026: INFO: Waiting for pod downwardapi-volume-d1ff19fc-1e60-4038-aec7-bc5e589b81b0 to disappear
Sep 22 09:00:49.171: INFO: Pod downwardapi-volume-d1ff19fc-1e60-4038-aec7-bc5e589b81b0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.206 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:00:49.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-dab50512-c910-4f5b-bce5-d99e859dca3c
STEP: Creating a pod to test consume configMaps
Sep 22 09:00:50.069: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a4958aa-70c3-4561-b42d-d67f2e3c76da" in namespace "configmap-6686" to be "Succeeded or Failed"
Sep 22 09:00:50.212: INFO: Pod "pod-configmaps-7a4958aa-70c3-4561-b42d-d67f2e3c76da": Phase="Pending", Reason="", readiness=false. Elapsed: 143.852308ms
Sep 22 09:00:52.374: INFO: Pod "pod-configmaps-7a4958aa-70c3-4561-b42d-d67f2e3c76da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.305362719s
STEP: Saw pod success
Sep 22 09:00:52.374: INFO: Pod "pod-configmaps-7a4958aa-70c3-4561-b42d-d67f2e3c76da" satisfied condition "Succeeded or Failed"
Sep 22 09:00:52.518: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-configmaps-7a4958aa-70c3-4561-b42d-d67f2e3c76da container agnhost-container: <nil>
STEP: delete the pod
Sep 22 09:00:52.812: INFO: Waiting for pod pod-configmaps-7a4958aa-70c3-4561-b42d-d67f2e3c76da to disappear
Sep 22 09:00:52.956: INFO: Pod pod-configmaps-7a4958aa-70c3-4561-b42d-d67f2e3c76da no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:00:52.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6686" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

SSSS
------------------------------
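The "with mappings" variant above projects a ConfigMap key onto a caller-chosen file path via items, rather than using the key name as the file name. A minimal sketch with illustrative names; the expected file content once the pod succeeds is value-1:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-mapping-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-mapping-demo
      items:
      - key: data-1           # existing key in the ConfigMap
        path: path/to/data-2  # file path it is remapped to inside the mount
EOF
kubectl logs pod-configmap-mapping-demo   # expected once Succeeded: value-1
kubectl delete pod/pod-configmap-mapping-demo configmap/configmap-mapping-demo

------------------------------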
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:53.277: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 74 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Sep 22 09:00:48.030: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 22 09:00:48.030: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-2k2q
STEP: Creating a pod to test subpath
Sep 22 09:00:48.179: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-2k2q" in namespace "provisioning-7141" to be "Succeeded or Failed"
Sep 22 09:00:48.323: INFO: Pod "pod-subpath-test-inlinevolume-2k2q": Phase="Pending", Reason="", readiness=false. Elapsed: 143.812305ms
Sep 22 09:00:50.467: INFO: Pod "pod-subpath-test-inlinevolume-2k2q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288083291s
Sep 22 09:00:52.612: INFO: Pod "pod-subpath-test-inlinevolume-2k2q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432517932s
STEP: Saw pod success
Sep 22 09:00:52.612: INFO: Pod "pod-subpath-test-inlinevolume-2k2q" satisfied condition "Succeeded or Failed"
Sep 22 09:00:52.755: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-2k2q container test-container-subpath-inlinevolume-2k2q: <nil>
STEP: delete the pod
Sep 22 09:00:53.053: INFO: Waiting for pod pod-subpath-test-inlinevolume-2k2q to disappear
Sep 22 09:00:53.197: INFO: Pod pod-subpath-test-inlinevolume-2k2q no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-2k2q
Sep 22 09:00:53.197: INFO: Deleting pod "pod-subpath-test-inlinevolume-2k2q" in namespace "provisioning-7141"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":13,"failed":0}

SSSS
------------------------------
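The "existing single file" subPath pattern above first creates a file inside the volume and then bind-mounts just that file into the test container. A rough equivalent using an init container and an emptyDir volume, with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-single-file-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-volume
    image: busybox:1.28
    command: ["sh", "-c", "mkdir -p /vol/provisioning && echo hello > /vol/provisioning/file.txt"]
    volumeMounts:
    - name: shared
      mountPath: /vol
  containers:
  - name: test-container-subpath
    image: busybox:1.28
    command: ["sh", "-c", "cat /mnt/file.txt"]
    volumeMounts:
    - name: shared
      mountPath: /mnt/file.txt         # mount point is the file itself
      subPath: provisioning/file.txt   # only this file from the volume is exposed
  volumes:
  - name: shared
    emptyDir: {}
EOF
kubectl logs pod-subpath-single-file-demo   # expected once Succeeded: hello
kubectl delete pod pod-subpath-single-file-demo

------------------------------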
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:00:55.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-5439" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":4,"skipped":28,"failed":0}

SSSS
------------------------------
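The storage version hash checked by the Discovery test above is published per resource in each group-version's discovery document (the APIResource storageVersionHash field, a beta field in this release). One way to inspect it by hand, assuming jq is available on the client; the group-version below is just an example:

kubectl get --raw /apis/apps/v1 | jq '.resources[] | {name, storageVersionHash}'

------------------------------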
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:55.488: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 6 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Sep 22 09:00:49.899: INFO: Waiting up to 5m0s for pod "pod-always-succeedf4c3538a-ff77-42ea-8dea-ec1a8d533f16" in namespace "pods-9051" to be "Succeeded or Failed"
Sep 22 09:00:50.042: INFO: Pod "pod-always-succeedf4c3538a-ff77-42ea-8dea-ec1a8d533f16": Phase="Pending", Reason="", readiness=false. Elapsed: 143.647347ms
Sep 22 09:00:52.187: INFO: Pod "pod-always-succeedf4c3538a-ff77-42ea-8dea-ec1a8d533f16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287850444s
Sep 22 09:00:54.333: INFO: Pod "pod-always-succeedf4c3538a-ff77-42ea-8dea-ec1a8d533f16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434250115s
STEP: Saw pod success
Sep 22 09:00:54.333: INFO: Pod "pod-always-succeedf4c3538a-ff77-42ea-8dea-ec1a8d533f16" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:00:56.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":3,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Sep 22 09:00:44.280: INFO: PersistentVolumeClaim pvc-gjws8 found but phase is Pending instead of Bound.
Sep 22 09:00:46.428: INFO: PersistentVolumeClaim pvc-gjws8 found and phase=Bound (4.436397469s)
Sep 22 09:00:46.428: INFO: Waiting up to 3m0s for PersistentVolume local-gsmx6 to have phase Bound
Sep 22 09:00:46.573: INFO: PersistentVolume local-gsmx6 found and phase=Bound (145.257132ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zqc8
STEP: Creating a pod to test subpath
Sep 22 09:00:47.006: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zqc8" in namespace "provisioning-8147" to be "Succeeded or Failed"
Sep 22 09:00:47.150: INFO: Pod "pod-subpath-test-preprovisionedpv-zqc8": Phase="Pending", Reason="", readiness=false. Elapsed: 143.805683ms
Sep 22 09:00:49.295: INFO: Pod "pod-subpath-test-preprovisionedpv-zqc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289003343s
Sep 22 09:00:51.440: INFO: Pod "pod-subpath-test-preprovisionedpv-zqc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433478353s
Sep 22 09:00:53.584: INFO: Pod "pod-subpath-test-preprovisionedpv-zqc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577633866s
Sep 22 09:00:55.728: INFO: Pod "pod-subpath-test-preprovisionedpv-zqc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.722086683s
STEP: Saw pod success
Sep 22 09:00:55.728: INFO: Pod "pod-subpath-test-preprovisionedpv-zqc8" satisfied condition "Succeeded or Failed"
Sep 22 09:00:55.874: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-zqc8 container test-container-subpath-preprovisionedpv-zqc8: <nil>
STEP: delete the pod
Sep 22 09:00:56.168: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zqc8 to disappear
Sep 22 09:00:56.313: INFO: Pod pod-subpath-test-preprovisionedpv-zqc8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zqc8
Sep 22 09:00:56.314: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zqc8" in namespace "provisioning-8147"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 24 lines ...
Sep 22 09:00:43.913: INFO: PersistentVolumeClaim pvc-k9bqh found but phase is Pending instead of Bound.
Sep 22 09:00:46.056: INFO: PersistentVolumeClaim pvc-k9bqh found and phase=Bound (4.435518753s)
Sep 22 09:00:46.056: INFO: Waiting up to 3m0s for PersistentVolume local-sx65m to have phase Bound
Sep 22 09:00:46.200: INFO: PersistentVolume local-sx65m found and phase=Bound (143.470617ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-78xp
STEP: Creating a pod to test exec-volume-test
Sep 22 09:00:46.633: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-78xp" in namespace "volume-7126" to be "Succeeded or Failed"
Sep 22 09:00:46.779: INFO: Pod "exec-volume-test-preprovisionedpv-78xp": Phase="Pending", Reason="", readiness=false. Elapsed: 146.286253ms
Sep 22 09:00:48.924: INFO: Pod "exec-volume-test-preprovisionedpv-78xp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290894936s
Sep 22 09:00:51.068: INFO: Pod "exec-volume-test-preprovisionedpv-78xp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43483797s
Sep 22 09:00:53.212: INFO: Pod "exec-volume-test-preprovisionedpv-78xp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579158446s
STEP: Saw pod success
Sep 22 09:00:53.212: INFO: Pod "exec-volume-test-preprovisionedpv-78xp" satisfied condition "Succeeded or Failed"
Sep 22 09:00:53.356: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-78xp container exec-container-preprovisionedpv-78xp: <nil>
STEP: delete the pod
Sep 22 09:00:53.649: INFO: Waiting for pod exec-volume-test-preprovisionedpv-78xp to disappear
Sep 22 09:00:53.794: INFO: Pod exec-volume-test-preprovisionedpv-78xp no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-78xp
Sep 22 09:00:53.794: INFO: Deleting pod "exec-volume-test-preprovisionedpv-78xp" in namespace "volume-7126"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 8 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
Sep 22 09:00:24.909: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 22 09:00:25.195: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8z6z
STEP: Creating a pod to test subpath
Sep 22 09:00:25.342: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8z6z" in namespace "provisioning-4291" to be "Succeeded or Failed"
Sep 22 09:00:25.498: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Pending", Reason="", readiness=false. Elapsed: 155.682453ms
Sep 22 09:00:27.642: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300154707s
Sep 22 09:00:29.787: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.444855773s
Sep 22 09:00:31.948: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.606448151s
Sep 22 09:00:34.094: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.751661481s
Sep 22 09:00:36.238: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.89620806s
... skipping 5 lines ...
Sep 22 09:00:49.107: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Pending", Reason="", readiness=false. Elapsed: 23.765447022s
Sep 22 09:00:51.253: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Pending", Reason="", readiness=false. Elapsed: 25.910904749s
Sep 22 09:00:53.397: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Pending", Reason="", readiness=false. Elapsed: 28.054756899s
Sep 22 09:00:55.541: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Pending", Reason="", readiness=false. Elapsed: 30.198898743s
Sep 22 09:00:57.686: INFO: Pod "pod-subpath-test-inlinevolume-8z6z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.343704558s
STEP: Saw pod success
Sep 22 09:00:57.686: INFO: Pod "pod-subpath-test-inlinevolume-8z6z" satisfied condition "Succeeded or Failed"
Sep 22 09:00:57.830: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-8z6z container test-container-subpath-inlinevolume-8z6z: <nil>
STEP: delete the pod
Sep 22 09:00:58.133: INFO: Waiting for pod pod-subpath-test-inlinevolume-8z6z to disappear
Sep 22 09:00:58.276: INFO: Pod pod-subpath-test-inlinevolume-8z6z no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8z6z
Sep 22 09:00:58.277: INFO: Deleting pod "pod-subpath-test-inlinevolume-8z6z" in namespace "provisioning-4291"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:58.871: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 120 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1027
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSS
------------------------------
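The "partially-specified validation schema" in the kubectl test above corresponds to an apiextensions.k8s.io/v1 CRD whose openAPIV3Schema sets x-kubernetes-preserve-unknown-fields, so validation has to accept extra properties it does not know about. A rough sketch with an illustrative group and kind:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.test
spec:
  group: example.test
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # schema only partially specified
EOF
kubectl wait --for=condition=Established crd/demos.example.test
cat <<'EOF' | kubectl apply -f -
apiVersion: example.test/v1
kind: Demo
metadata:
  name: demo-with-extras
extraProperty: "accepted because unknown fields are preserved"
EOF
kubectl delete demo demo-with-extras
kubectl delete crd demos.example.test

------------------------------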
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:58.962: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:00:56.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c83ce59-67ed-4552-bcb7-260a58bc9b98" in namespace "downward-api-1105" to be "Succeeded or Failed"
Sep 22 09:00:56.530: INFO: Pod "downwardapi-volume-4c83ce59-67ed-4552-bcb7-260a58bc9b98": Phase="Pending", Reason="", readiness=false. Elapsed: 144.118294ms
Sep 22 09:00:58.675: INFO: Pod "downwardapi-volume-4c83ce59-67ed-4552-bcb7-260a58bc9b98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289109352s
STEP: Saw pod success
Sep 22 09:00:58.675: INFO: Pod "downwardapi-volume-4c83ce59-67ed-4552-bcb7-260a58bc9b98" satisfied condition "Succeeded or Failed"
Sep 22 09:00:58.821: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod downwardapi-volume-4c83ce59-67ed-4552-bcb7-260a58bc9b98 container client-container: <nil>
STEP: delete the pod
Sep 22 09:00:59.124: INFO: Waiting for pod downwardapi-volume-4c83ce59-67ed-4552-bcb7-260a58bc9b98 to disappear
Sep 22 09:00:59.267: INFO: Pod downwardapi-volume-4c83ce59-67ed-4552-bcb7-260a58bc9b98 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-2x8d
STEP: Creating a pod to test atomic-volume-subpath
Sep 22 09:00:34.963: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2x8d" in namespace "subpath-6637" to be "Succeeded or Failed"
Sep 22 09:00:35.106: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Pending", Reason="", readiness=false. Elapsed: 143.133832ms
Sep 22 09:00:37.250: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Running", Reason="", readiness=true. Elapsed: 2.287028164s
Sep 22 09:00:39.395: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Running", Reason="", readiness=true. Elapsed: 4.431601962s
Sep 22 09:00:41.538: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Running", Reason="", readiness=true. Elapsed: 6.57557917s
Sep 22 09:00:43.689: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Running", Reason="", readiness=true. Elapsed: 8.725659734s
Sep 22 09:00:45.833: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Running", Reason="", readiness=true. Elapsed: 10.870476538s
Sep 22 09:00:47.979: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Running", Reason="", readiness=true. Elapsed: 13.01573066s
Sep 22 09:00:50.125: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Running", Reason="", readiness=true. Elapsed: 15.161648994s
Sep 22 09:00:52.269: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Running", Reason="", readiness=true. Elapsed: 17.30616744s
Sep 22 09:00:54.413: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Running", Reason="", readiness=true. Elapsed: 19.4503453s
Sep 22 09:00:56.557: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Running", Reason="", readiness=true. Elapsed: 21.594484589s
Sep 22 09:00:58.701: INFO: Pod "pod-subpath-test-downwardapi-2x8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.738025647s
STEP: Saw pod success
Sep 22 09:00:58.701: INFO: Pod "pod-subpath-test-downwardapi-2x8d" satisfied condition "Succeeded or Failed"
Sep 22 09:00:58.845: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-downwardapi-2x8d container test-container-subpath-downwardapi-2x8d: <nil>
STEP: delete the pod
Sep 22 09:00:59.136: INFO: Waiting for pod pod-subpath-test-downwardapi-2x8d to disappear
Sep 22 09:00:59.280: INFO: Pod pod-subpath-test-downwardapi-2x8d no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-2x8d
Sep 22 09:00:59.280: INFO: Deleting pod "pod-subpath-test-downwardapi-2x8d" in namespace "subpath-6637"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:59.761: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 151 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:00:59.861: INFO: Only supported for providers [gce gke] (not aws)
... skipping 68 lines ...
Sep 22 09:00:44.048: INFO: PersistentVolumeClaim pvc-l92v4 found but phase is Pending instead of Bound.
Sep 22 09:00:46.192: INFO: PersistentVolumeClaim pvc-l92v4 found and phase=Bound (4.432603543s)
Sep 22 09:00:46.192: INFO: Waiting up to 3m0s for PersistentVolume local-5cbh2 to have phase Bound
Sep 22 09:00:46.337: INFO: PersistentVolume local-5cbh2 found and phase=Bound (144.119116ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-v8vb
STEP: Creating a pod to test subpath
Sep 22 09:00:46.769: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-v8vb" in namespace "provisioning-4784" to be "Succeeded or Failed"
Sep 22 09:00:46.913: INFO: Pod "pod-subpath-test-preprovisionedpv-v8vb": Phase="Pending", Reason="", readiness=false. Elapsed: 143.736132ms
Sep 22 09:00:49.057: INFO: Pod "pod-subpath-test-preprovisionedpv-v8vb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288374782s
Sep 22 09:00:51.202: INFO: Pod "pod-subpath-test-preprovisionedpv-v8vb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433286504s
STEP: Saw pod success
Sep 22 09:00:51.202: INFO: Pod "pod-subpath-test-preprovisionedpv-v8vb" satisfied condition "Succeeded or Failed"
Sep 22 09:00:51.346: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-v8vb container test-container-subpath-preprovisionedpv-v8vb: <nil>
STEP: delete the pod
Sep 22 09:00:51.645: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-v8vb to disappear
Sep 22 09:00:51.789: INFO: Pod pod-subpath-test-preprovisionedpv-v8vb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-v8vb
Sep 22 09:00:51.789: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-v8vb" in namespace "provisioning-4784"
STEP: Creating pod pod-subpath-test-preprovisionedpv-v8vb
STEP: Creating a pod to test subpath
Sep 22 09:00:52.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-v8vb" in namespace "provisioning-4784" to be "Succeeded or Failed"
Sep 22 09:00:52.222: INFO: Pod "pod-subpath-test-preprovisionedpv-v8vb": Phase="Pending", Reason="", readiness=false. Elapsed: 143.563541ms
Sep 22 09:00:54.367: INFO: Pod "pod-subpath-test-preprovisionedpv-v8vb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288987779s
STEP: Saw pod success
Sep 22 09:00:54.367: INFO: Pod "pod-subpath-test-preprovisionedpv-v8vb" satisfied condition "Succeeded or Failed"
Sep 22 09:00:54.511: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-v8vb container test-container-subpath-preprovisionedpv-v8vb: <nil>
STEP: delete the pod
Sep 22 09:00:54.806: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-v8vb to disappear
Sep 22 09:00:54.949: INFO: Pod pod-subpath-test-preprovisionedpv-v8vb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-v8vb
Sep 22 09:00:54.950: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-v8vb" in namespace "provisioning-4784"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":8,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:00.012: INFO: Only supported for providers [openstack] (not aws)
... skipping 188 lines ...
Sep 22 09:00:24.857: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-93187dc74
STEP: creating a claim
Sep 22 09:00:25.002: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-mmct
STEP: Creating a pod to test subpath
Sep 22 09:00:25.445: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mmct" in namespace "provisioning-9318" to be "Succeeded or Failed"
Sep 22 09:00:25.607: INFO: Pod "pod-subpath-test-dynamicpv-mmct": Phase="Pending", Reason="", readiness=false. Elapsed: 161.197621ms
Sep 22 09:00:27.752: INFO: Pod "pod-subpath-test-dynamicpv-mmct": Phase="Pending", Reason="", readiness=false. Elapsed: 2.306496643s
Sep 22 09:00:29.898: INFO: Pod "pod-subpath-test-dynamicpv-mmct": Phase="Pending", Reason="", readiness=false. Elapsed: 4.452586522s
Sep 22 09:00:32.052: INFO: Pod "pod-subpath-test-dynamicpv-mmct": Phase="Pending", Reason="", readiness=false. Elapsed: 6.606805228s
Sep 22 09:00:34.197: INFO: Pod "pod-subpath-test-dynamicpv-mmct": Phase="Pending", Reason="", readiness=false. Elapsed: 8.751406475s
Sep 22 09:00:36.344: INFO: Pod "pod-subpath-test-dynamicpv-mmct": Phase="Pending", Reason="", readiness=false. Elapsed: 10.898399614s
Sep 22 09:00:38.489: INFO: Pod "pod-subpath-test-dynamicpv-mmct": Phase="Pending", Reason="", readiness=false. Elapsed: 13.043573437s
Sep 22 09:00:40.636: INFO: Pod "pod-subpath-test-dynamicpv-mmct": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.19023641s
STEP: Saw pod success
Sep 22 09:00:40.636: INFO: Pod "pod-subpath-test-dynamicpv-mmct" satisfied condition "Succeeded or Failed"
Sep 22 09:00:40.779: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-mmct container test-container-volume-dynamicpv-mmct: <nil>
STEP: delete the pod
Sep 22 09:00:41.084: INFO: Waiting for pod pod-subpath-test-dynamicpv-mmct to disappear
Sep 22 09:00:41.227: INFO: Pod pod-subpath-test-dynamicpv-mmct no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-mmct
Sep 22 09:00:41.227: INFO: Deleting pod "pod-subpath-test-dynamicpv-mmct" in namespace "provisioning-9318"
... skipping 49 lines ...
• [SLOW TEST:41.628 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1055
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:05.575: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-n9mf
STEP: Creating a pod to test atomic-volume-subpath
Sep 22 09:00:27.341: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-n9mf" in namespace "subpath-7065" to be "Succeeded or Failed"
Sep 22 09:00:27.488: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Pending", Reason="", readiness=false. Elapsed: 146.816926ms
Sep 22 09:00:29.632: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29054722s
Sep 22 09:00:31.787: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.446139189s
Sep 22 09:00:33.939: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5981658s
Sep 22 09:00:36.084: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743021551s
Sep 22 09:00:38.228: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.886675964s
... skipping 8 lines ...
Sep 22 09:00:57.527: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Running", Reason="", readiness=true. Elapsed: 30.185919842s
Sep 22 09:00:59.694: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Running", Reason="", readiness=true. Elapsed: 32.353065861s
Sep 22 09:01:01.847: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Running", Reason="", readiness=true. Elapsed: 34.5061312s
Sep 22 09:01:03.997: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Running", Reason="", readiness=true. Elapsed: 36.655644741s
Sep 22 09:01:06.163: INFO: Pod "pod-subpath-test-secret-n9mf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.821383688s
STEP: Saw pod success
Sep 22 09:01:06.163: INFO: Pod "pod-subpath-test-secret-n9mf" satisfied condition "Succeeded or Failed"
Sep 22 09:01:06.340: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-secret-n9mf container test-container-subpath-secret-n9mf: <nil>
STEP: delete the pod
Sep 22 09:01:06.652: INFO: Waiting for pod pod-subpath-test-secret-n9mf to disappear
Sep 22 09:01:06.795: INFO: Pod pod-subpath-test-secret-n9mf no longer exists
STEP: Deleting pod pod-subpath-test-secret-n9mf
Sep 22 09:01:06.795: INFO: Deleting pod "pod-subpath-test-secret-n9mf" in namespace "subpath-7065"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:07.385: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 104 lines ...
Sep 22 09:00:58.666: INFO: stdout: "externalname-service-jwth2"
Sep 22 09:00:58.666: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-212 exec execpodpwljg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.94.131 80'
Sep 22 09:01:00.299: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.68.94.131 80\nConnection to 100.68.94.131 80 port [tcp/http] succeeded!\n"
Sep 22 09:01:00.299: INFO: stdout: ""
Sep 22 09:01:01.300: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-212 exec execpodpwljg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.94.131 80'
Sep 22 09:01:04.800: INFO: rc: 1
Sep 22 09:01:04.800: INFO: Service reachability failing with error: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-212 exec execpodpwljg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.94.131 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.68.94.131 80
nc: connect to 100.68.94.131 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 22 09:01:05.300: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-212 exec execpodpwljg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.94.131 80'
Sep 22 09:01:07.027: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.68.94.131 80\nConnection to 100.68.94.131 80 port [tcp/http] succeeded!\n"
Sep 22 09:01:07.027: INFO: stdout: "externalname-service-jwth2"
Sep 22 09:01:07.027: INFO: Cleaning up the ExternalName to ClusterIP test service
... skipping 8 lines ...
• [SLOW TEST:21.225 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:01:03.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 22 09:01:04.134: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a29dce0a-a42d-4761-84f2-fa642a8cc079" in namespace "security-context-test-6197" to be "Succeeded or Failed"
Sep 22 09:01:04.278: INFO: Pod "busybox-readonly-false-a29dce0a-a42d-4761-84f2-fa642a8cc079": Phase="Pending", Reason="", readiness=false. Elapsed: 143.757862ms
Sep 22 09:01:06.426: INFO: Pod "busybox-readonly-false-a29dce0a-a42d-4761-84f2-fa642a8cc079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292366267s
Sep 22 09:01:08.571: INFO: Pod "busybox-readonly-false-a29dce0a-a42d-4761-84f2-fa642a8cc079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437427178s
Sep 22 09:01:08.571: INFO: Pod "busybox-readonly-false-a29dce0a-a42d-4761-84f2-fa642a8cc079" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:01:08.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6197" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:08.885: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 45 lines ...
Sep 22 09:00:59.817: INFO: PersistentVolumeClaim pvc-8rmnw found but phase is Pending instead of Bound.
Sep 22 09:01:01.964: INFO: PersistentVolumeClaim pvc-8rmnw found and phase=Bound (15.162516365s)
Sep 22 09:01:01.964: INFO: Waiting up to 3m0s for PersistentVolume local-dj7lc to have phase Bound
Sep 22 09:01:02.108: INFO: PersistentVolume local-dj7lc found and phase=Bound (143.725472ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bknm
STEP: Creating a pod to test subpath
Sep 22 09:01:02.540: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bknm" in namespace "provisioning-5796" to be "Succeeded or Failed"
Sep 22 09:01:02.683: INFO: Pod "pod-subpath-test-preprovisionedpv-bknm": Phase="Pending", Reason="", readiness=false. Elapsed: 143.379409ms
Sep 22 09:01:04.829: INFO: Pod "pod-subpath-test-preprovisionedpv-bknm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289218212s
Sep 22 09:01:06.974: INFO: Pod "pod-subpath-test-preprovisionedpv-bknm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433857913s
Sep 22 09:01:09.123: INFO: Pod "pod-subpath-test-preprovisionedpv-bknm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.583034519s
STEP: Saw pod success
Sep 22 09:01:09.123: INFO: Pod "pod-subpath-test-preprovisionedpv-bknm" satisfied condition "Succeeded or Failed"
Sep 22 09:01:09.267: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-bknm container test-container-volume-preprovisionedpv-bknm: <nil>
STEP: delete the pod
Sep 22 09:01:09.562: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bknm to disappear
Sep 22 09:01:09.707: INFO: Pod pod-subpath-test-preprovisionedpv-bknm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bknm
Sep 22 09:01:09.707: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bknm" in namespace "provisioning-5796"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:11.724: INFO: Driver aws doesn't publish storage capacity -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/capacity.go:111

      Driver aws doesn't publish storage capacity -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/capacity.go:78
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:01:02.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
• [SLOW TEST:10.359 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:13.386: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":2,"skipped":20,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:01:07.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:01:08.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b63ed08-3f12-479a-b8e6-06e49569b4bb" in namespace "downward-api-8702" to be "Succeeded or Failed"
Sep 22 09:01:08.525: INFO: Pod "downwardapi-volume-5b63ed08-3f12-479a-b8e6-06e49569b4bb": Phase="Pending", Reason="", readiness=false. Elapsed: 143.706939ms
Sep 22 09:01:10.670: INFO: Pod "downwardapi-volume-5b63ed08-3f12-479a-b8e6-06e49569b4bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288436856s
Sep 22 09:01:12.821: INFO: Pod "downwardapi-volume-5b63ed08-3f12-479a-b8e6-06e49569b4bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.439851825s
STEP: Saw pod success
Sep 22 09:01:12.821: INFO: Pod "downwardapi-volume-5b63ed08-3f12-479a-b8e6-06e49569b4bb" satisfied condition "Succeeded or Failed"
Sep 22 09:01:12.966: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod downwardapi-volume-5b63ed08-3f12-479a-b8e6-06e49569b4bb container client-container: <nil>
STEP: delete the pod
Sep 22 09:01:13.301: INFO: Waiting for pod downwardapi-volume-5b63ed08-3f12-479a-b8e6-06e49569b4bb to disappear
Sep 22 09:01:13.445: INFO: Pod downwardapi-volume-5b63ed08-3f12-479a-b8e6-06e49569b4bb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.227 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":19,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:14.319: INFO: Only supported for providers [gce gke] (not aws)
... skipping 64 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:33.182 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":2,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:14.509: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 71 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 22 09:01:12.625: INFO: Waiting up to 5m0s for pod "pod-6732cc72-aaa4-419d-8eab-ccf28d398c1a" in namespace "emptydir-3874" to be "Succeeded or Failed"
Sep 22 09:01:12.769: INFO: Pod "pod-6732cc72-aaa4-419d-8eab-ccf28d398c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 143.843604ms
Sep 22 09:01:14.913: INFO: Pod "pod-6732cc72-aaa4-419d-8eab-ccf28d398c1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287642498s
STEP: Saw pod success
Sep 22 09:01:14.913: INFO: Pod "pod-6732cc72-aaa4-419d-8eab-ccf28d398c1a" satisfied condition "Succeeded or Failed"
Sep 22 09:01:15.056: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-6732cc72-aaa4-419d-8eab-ccf28d398c1a container test-container: <nil>
STEP: delete the pod
Sep 22 09:01:15.351: INFO: Waiting for pod pod-6732cc72-aaa4-419d-8eab-ccf28d398c1a to disappear
Sep 22 09:01:15.495: INFO: Pod pod-6732cc72-aaa4-419d-8eab-ccf28d398c1a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:01:15.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3874" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":2,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:01:15.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1941" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":4,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:16.505: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 90 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:01:17.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1895" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":5,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:01:18.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-8722" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:18.361: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 72 lines ...
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Sep 22 09:01:15.307: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 22 09:01:15.307: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-wkqg
STEP: Creating a pod to test exec-volume-test
Sep 22 09:01:15.498: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-wkqg" in namespace "volume-6946" to be "Succeeded or Failed"
Sep 22 09:01:15.649: INFO: Pod "exec-volume-test-inlinevolume-wkqg": Phase="Pending", Reason="", readiness=false. Elapsed: 151.30075ms
Sep 22 09:01:17.796: INFO: Pod "exec-volume-test-inlinevolume-wkqg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298007391s
Sep 22 09:01:19.940: INFO: Pod "exec-volume-test-inlinevolume-wkqg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442639325s
Sep 22 09:01:22.091: INFO: Pod "exec-volume-test-inlinevolume-wkqg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.593041155s
STEP: Saw pod success
Sep 22 09:01:22.091: INFO: Pod "exec-volume-test-inlinevolume-wkqg" satisfied condition "Succeeded or Failed"
Sep 22 09:01:22.236: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod exec-volume-test-inlinevolume-wkqg container exec-container-inlinevolume-wkqg: <nil>
STEP: delete the pod
Sep 22 09:01:22.530: INFO: Waiting for pod exec-volume-test-inlinevolume-wkqg to disappear
Sep 22 09:01:22.673: INFO: Pod exec-volume-test-inlinevolume-wkqg no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-wkqg
Sep 22 09:01:22.673: INFO: Deleting pod "exec-volume-test-inlinevolume-wkqg" in namespace "volume-6946"
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:23.130: INFO: Driver local doesn't support ext4 -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 31 lines ...
• [SLOW TEST:7.068 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":6,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:25.310: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
Sep 22 09:00:59.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
Sep 22 09:01:00.646: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 22 09:01:00.939: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1340" in namespace "provisioning-1340" to be "Succeeded or Failed"
Sep 22 09:01:01.083: INFO: Pod "hostpath-symlink-prep-provisioning-1340": Phase="Pending", Reason="", readiness=false. Elapsed: 143.712003ms
Sep 22 09:01:03.227: INFO: Pod "hostpath-symlink-prep-provisioning-1340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287666219s
STEP: Saw pod success
Sep 22 09:01:03.227: INFO: Pod "hostpath-symlink-prep-provisioning-1340" satisfied condition "Succeeded or Failed"
Sep 22 09:01:03.227: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1340" in namespace "provisioning-1340"
Sep 22 09:01:03.382: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1340" to be fully deleted
Sep 22 09:01:03.526: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-l5c4
Sep 22 09:01:05.986: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-1340 exec pod-subpath-test-inlinevolume-l5c4 --container test-container-volume-inlinevolume-l5c4 -- /bin/sh -c rm -r /test-volume/provisioning-1340'
Sep 22 09:01:07.462: INFO: stderr: ""
Sep 22 09:01:07.462: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-l5c4
Sep 22 09:01:07.462: INFO: Deleting pod "pod-subpath-test-inlinevolume-l5c4" in namespace "provisioning-1340"
Sep 22 09:01:07.613: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-l5c4" to be fully deleted
STEP: Deleting pod
Sep 22 09:01:17.901: INFO: Deleting pod "pod-subpath-test-inlinevolume-l5c4" in namespace "provisioning-1340"
Sep 22 09:01:18.190: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1340" in namespace "provisioning-1340" to be "Succeeded or Failed"
Sep 22 09:01:18.333: INFO: Pod "hostpath-symlink-prep-provisioning-1340": Phase="Pending", Reason="", readiness=false. Elapsed: 143.551902ms
Sep 22 09:01:20.478: INFO: Pod "hostpath-symlink-prep-provisioning-1340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28842812s
Sep 22 09:01:22.623: INFO: Pod "hostpath-symlink-prep-provisioning-1340": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432897853s
Sep 22 09:01:24.767: INFO: Pod "hostpath-symlink-prep-provisioning-1340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577030453s
STEP: Saw pod success
Sep 22 09:01:24.767: INFO: Pod "hostpath-symlink-prep-provisioning-1340" satisfied condition "Succeeded or Failed"
Sep 22 09:01:24.767: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1340" in namespace "provisioning-1340"
Sep 22 09:01:24.914: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1340" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:01:25.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1340" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:25.356: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":3,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:27.720: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:28.677: INFO: Only supported for providers [openstack] (not aws)
... skipping 160 lines ...
• [SLOW TEST:13.037 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:481
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":4,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:31.463: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 126 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":19,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":17,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:35.378: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 28 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Sep 22 09:01:29.539: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 22 09:01:29.683: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-tdbr
STEP: Creating a pod to test subpath
Sep 22 09:01:29.831: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-tdbr" in namespace "provisioning-6976" to be "Succeeded or Failed"
Sep 22 09:01:29.975: INFO: Pod "pod-subpath-test-inlinevolume-tdbr": Phase="Pending", Reason="", readiness=false. Elapsed: 143.863037ms
Sep 22 09:01:32.122: INFO: Pod "pod-subpath-test-inlinevolume-tdbr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290672618s
Sep 22 09:01:34.272: INFO: Pod "pod-subpath-test-inlinevolume-tdbr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.440707048s
STEP: Saw pod success
Sep 22 09:01:34.272: INFO: Pod "pod-subpath-test-inlinevolume-tdbr" satisfied condition "Succeeded or Failed"
Sep 22 09:01:34.416: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-tdbr container test-container-volume-inlinevolume-tdbr: <nil>
STEP: delete the pod
Sep 22 09:01:34.754: INFO: Waiting for pod pod-subpath-test-inlinevolume-tdbr to disappear
Sep 22 09:01:34.901: INFO: Pod pod-subpath-test-inlinevolume-tdbr no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-tdbr
Sep 22 09:01:34.901: INFO: Deleting pod "pod-subpath-test-inlinevolume-tdbr" in namespace "provisioning-6976"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":47,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:35.533: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 22 09:01:26.381: INFO: The status of Pod server-envvars-15bcaced-4f84-46f8-adc3-2dbb83343753 is Pending, waiting for it to be Running (with Ready = true)
Sep 22 09:01:28.525: INFO: The status of Pod server-envvars-15bcaced-4f84-46f8-adc3-2dbb83343753 is Running (Ready = true)
Sep 22 09:01:28.962: INFO: Waiting up to 5m0s for pod "client-envvars-e0b8d013-9fbc-4226-ba76-7900adc5d3bf" in namespace "pods-7029" to be "Succeeded or Failed"
Sep 22 09:01:29.106: INFO: Pod "client-envvars-e0b8d013-9fbc-4226-ba76-7900adc5d3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 143.722403ms
Sep 22 09:01:31.252: INFO: Pod "client-envvars-e0b8d013-9fbc-4226-ba76-7900adc5d3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289783283s
Sep 22 09:01:33.397: INFO: Pod "client-envvars-e0b8d013-9fbc-4226-ba76-7900adc5d3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434982029s
Sep 22 09:01:35.543: INFO: Pod "client-envvars-e0b8d013-9fbc-4226-ba76-7900adc5d3bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.580592312s
STEP: Saw pod success
Sep 22 09:01:35.543: INFO: Pod "client-envvars-e0b8d013-9fbc-4226-ba76-7900adc5d3bf" satisfied condition "Succeeded or Failed"
Sep 22 09:01:35.686: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod client-envvars-e0b8d013-9fbc-4226-ba76-7900adc5d3bf container env3cont: <nil>
STEP: delete the pod
Sep 22 09:01:35.982: INFO: Waiting for pod client-envvars-e0b8d013-9fbc-4226-ba76-7900adc5d3bf to disappear
Sep 22 09:01:36.126: INFO: Pod client-envvars-e0b8d013-9fbc-4226-ba76-7900adc5d3bf no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.143 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":4,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:38.432: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:39.977: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 88 lines ...
Sep 22 09:01:30.057: INFO: PersistentVolumeClaim pvc-ff7mg found but phase is Pending instead of Bound.
Sep 22 09:01:32.202: INFO: PersistentVolumeClaim pvc-ff7mg found and phase=Bound (10.866330203s)
Sep 22 09:01:32.202: INFO: Waiting up to 3m0s for PersistentVolume local-scv4c to have phase Bound
Sep 22 09:01:32.346: INFO: PersistentVolume local-scv4c found and phase=Bound (143.693898ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-q57l
STEP: Creating a pod to test subpath
Sep 22 09:01:32.779: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q57l" in namespace "provisioning-5907" to be "Succeeded or Failed"
Sep 22 09:01:32.923: INFO: Pod "pod-subpath-test-preprovisionedpv-q57l": Phase="Pending", Reason="", readiness=false. Elapsed: 143.89376ms
Sep 22 09:01:35.069: INFO: Pod "pod-subpath-test-preprovisionedpv-q57l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290614307s
Sep 22 09:01:37.241: INFO: Pod "pod-subpath-test-preprovisionedpv-q57l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.462217455s
STEP: Saw pod success
Sep 22 09:01:37.241: INFO: Pod "pod-subpath-test-preprovisionedpv-q57l" satisfied condition "Succeeded or Failed"
Sep 22 09:01:37.463: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-q57l container test-container-subpath-preprovisionedpv-q57l: <nil>
STEP: delete the pod
Sep 22 09:01:37.903: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q57l to disappear
Sep 22 09:01:38.048: INFO: Pod pod-subpath-test-preprovisionedpv-q57l no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-q57l
Sep 22 09:01:38.048: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q57l" in namespace "provisioning-5907"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:40.063: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 90 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 22 09:01:36.258: INFO: Waiting up to 5m0s for pod "busybox-user-65534-f66d21ba-d7ae-4f90-b45b-c2162aff0606" in namespace "security-context-test-6677" to be "Succeeded or Failed"
Sep 22 09:01:36.491: INFO: Pod "busybox-user-65534-f66d21ba-d7ae-4f90-b45b-c2162aff0606": Phase="Pending", Reason="", readiness=false. Elapsed: 232.646522ms
Sep 22 09:01:38.635: INFO: Pod "busybox-user-65534-f66d21ba-d7ae-4f90-b45b-c2162aff0606": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376469002s
Sep 22 09:01:40.791: INFO: Pod "busybox-user-65534-f66d21ba-d7ae-4f90-b45b-c2162aff0606": Phase="Pending", Reason="", readiness=false. Elapsed: 4.532601485s
Sep 22 09:01:42.942: INFO: Pod "busybox-user-65534-f66d21ba-d7ae-4f90-b45b-c2162aff0606": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.683053179s
Sep 22 09:01:42.942: INFO: Pod "busybox-user-65534-f66d21ba-d7ae-4f90-b45b-c2162aff0606" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:01:42.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6677" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:01:43.265: INFO: >>> kubeConfig: /root/.kube/config
... skipping 155 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:01:45.232: INFO: Waiting up to 5m0s for pod "metadata-volume-ba36d402-adf9-41d1-bc82-4adb0f3d98ff" in namespace "downward-api-2423" to be "Succeeded or Failed"
Sep 22 09:01:45.375: INFO: Pod "metadata-volume-ba36d402-adf9-41d1-bc82-4adb0f3d98ff": Phase="Pending", Reason="", readiness=false. Elapsed: 143.645193ms
Sep 22 09:01:47.520: INFO: Pod "metadata-volume-ba36d402-adf9-41d1-bc82-4adb0f3d98ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288639284s
STEP: Saw pod success
Sep 22 09:01:47.520: INFO: Pod "metadata-volume-ba36d402-adf9-41d1-bc82-4adb0f3d98ff" satisfied condition "Succeeded or Failed"
Sep 22 09:01:47.664: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod metadata-volume-ba36d402-adf9-41d1-bc82-4adb0f3d98ff container client-container: <nil>
STEP: delete the pod
Sep 22 09:01:47.961: INFO: Waiting for pod metadata-volume-ba36d402-adf9-41d1-bc82-4adb0f3d98ff to disappear
Sep 22 09:01:48.105: INFO: Pod metadata-volume-ba36d402-adf9-41d1-bc82-4adb0f3d98ff no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:01:48.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2423" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":31,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:48.423: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 393 lines ...
Sep 22 09:00:59.266: INFO: PersistentVolumeClaim pvc-wbgzm found but phase is Pending instead of Bound.
Sep 22 09:01:01.412: INFO: PersistentVolumeClaim pvc-wbgzm found and phase=Bound (2.289775491s)
Sep 22 09:01:01.412: INFO: Waiting up to 3m0s for PersistentVolume aws-956pt to have phase Bound
Sep 22 09:01:01.556: INFO: PersistentVolume aws-956pt found and phase=Bound (144.123597ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-j8nm
STEP: Creating a pod to test exec-volume-test
Sep 22 09:01:01.997: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-j8nm" in namespace "volume-8022" to be "Succeeded or Failed"
Sep 22 09:01:02.140: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Pending", Reason="", readiness=false. Elapsed: 143.358954ms
Sep 22 09:01:04.284: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287249176s
Sep 22 09:01:06.442: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445616071s
Sep 22 09:01:08.588: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.590722337s
Sep 22 09:01:10.732: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.734836858s
Sep 22 09:01:12.905: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.908193778s
... skipping 7 lines ...
Sep 22 09:01:30.064: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Pending", Reason="", readiness=false. Elapsed: 28.066989065s
Sep 22 09:01:32.208: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Pending", Reason="", readiness=false. Elapsed: 30.21157555s
Sep 22 09:01:34.353: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Pending", Reason="", readiness=false. Elapsed: 32.356035146s
Sep 22 09:01:36.520: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Pending", Reason="", readiness=false. Elapsed: 34.523189042s
Sep 22 09:01:38.665: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.668253082s
STEP: Saw pod success
Sep 22 09:01:38.665: INFO: Pod "exec-volume-test-preprovisionedpv-j8nm" satisfied condition "Succeeded or Failed"
Sep 22 09:01:38.822: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-j8nm container exec-container-preprovisionedpv-j8nm: <nil>
STEP: delete the pod
Sep 22 09:01:39.128: INFO: Waiting for pod exec-volume-test-preprovisionedpv-j8nm to disappear
Sep 22 09:01:39.271: INFO: Pod exec-volume-test-preprovisionedpv-j8nm no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-j8nm
Sep 22 09:01:39.271: INFO: Deleting pod "exec-volume-test-preprovisionedpv-j8nm" in namespace "volume-8022"
STEP: Deleting pv and pvc
Sep 22 09:01:39.417: INFO: Deleting PersistentVolumeClaim "pvc-wbgzm"
Sep 22 09:01:39.574: INFO: Deleting PersistentVolume "aws-956pt"
Sep 22 09:01:40.023: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0f7f3c3198c97adfd", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f7f3c3198c97adfd is currently attached to i-021f849308fc74b8f
	status code: 400, request id: e02ec44f-2dbb-41ec-b7b2-d59d3e945629
Sep 22 09:01:45.815: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0f7f3c3198c97adfd", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f7f3c3198c97adfd is currently attached to i-021f849308fc74b8f
	status code: 400, request id: 23cdd691-04fe-4fc3-89a6-f06da7c5a402
Sep 22 09:01:51.596: INFO: Successfully deleted PD "aws://sa-east-1a/vol-0f7f3c3198c97adfd".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:01:51.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8022" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:51.911: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 127 lines ...
• [SLOW TEST:41.046 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:55.029: INFO: Only supported for providers [vsphere] (not aws)
... skipping 67 lines ...
• [SLOW TEST:30.542 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:318
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":7,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:55.907: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:01:55.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-3358" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:01:49.888: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Sep 22 09:01:50.610: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 22 09:01:50.610: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qfhf
STEP: Creating a pod to test subpath
Sep 22 09:01:50.755: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qfhf" in namespace "provisioning-5440" to be "Succeeded or Failed"
Sep 22 09:01:50.899: INFO: Pod "pod-subpath-test-inlinevolume-qfhf": Phase="Pending", Reason="", readiness=false. Elapsed: 143.566904ms
Sep 22 09:01:53.044: INFO: Pod "pod-subpath-test-inlinevolume-qfhf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288559952s
Sep 22 09:01:55.189: INFO: Pod "pod-subpath-test-inlinevolume-qfhf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433746467s
STEP: Saw pod success
Sep 22 09:01:55.189: INFO: Pod "pod-subpath-test-inlinevolume-qfhf" satisfied condition "Succeeded or Failed"
Sep 22 09:01:55.333: INFO: Trying to get logs from node ip-172-20-50-246.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-qfhf container test-container-volume-inlinevolume-qfhf: <nil>
STEP: delete the pod
Sep 22 09:01:55.633: INFO: Waiting for pod pod-subpath-test-inlinevolume-qfhf to disappear
Sep 22 09:01:55.777: INFO: Pod pod-subpath-test-inlinevolume-qfhf no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qfhf
Sep 22 09:01:55.777: INFO: Deleting pod "pod-subpath-test-inlinevolume-qfhf" in namespace "provisioning-5440"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:56.362: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 82 lines ...
STEP: Destroying namespace "services-5685" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:58.936: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 70 lines ...
• [SLOW TEST:23.695 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:01:59.258: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:00.799: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Sep 22 09:01:55.586: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 22 09:01:55.730: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-rjx9
STEP: Creating a pod to test subpath
Sep 22 09:01:55.876: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-rjx9" in namespace "provisioning-1246" to be "Succeeded or Failed"
Sep 22 09:01:56.020: INFO: Pod "pod-subpath-test-inlinevolume-rjx9": Phase="Pending", Reason="", readiness=false. Elapsed: 143.969516ms
Sep 22 09:01:58.165: INFO: Pod "pod-subpath-test-inlinevolume-rjx9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28873914s
Sep 22 09:02:00.313: INFO: Pod "pod-subpath-test-inlinevolume-rjx9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437085274s
STEP: Saw pod success
Sep 22 09:02:00.314: INFO: Pod "pod-subpath-test-inlinevolume-rjx9" satisfied condition "Succeeded or Failed"
Sep 22 09:02:00.457: INFO: Trying to get logs from node ip-172-20-50-246.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-rjx9 container test-container-volume-inlinevolume-rjx9: <nil>
STEP: delete the pod
Sep 22 09:02:00.756: INFO: Waiting for pod pod-subpath-test-inlinevolume-rjx9 to disappear
Sep 22 09:02:00.903: INFO: Pod pod-subpath-test-inlinevolume-rjx9 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-rjx9
Sep 22 09:02:00.903: INFO: Deleting pod "pod-subpath-test-inlinevolume-rjx9" in namespace "provisioning-1246"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:01.514: INFO: Only supported for providers [gce gke] (not aws)
... skipping 58 lines ...
• [SLOW TEST:23.755 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":5,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:02.225: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 94 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-8822/configmap-test-ae459968-0c57-4a86-8af1-a5d111677201
STEP: Creating a pod to test consume configMaps
Sep 22 09:02:00.000: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc2ca5e7-36f0-417a-8a60-ae41168505dc" in namespace "configmap-8822" to be "Succeeded or Failed"
Sep 22 09:02:00.144: INFO: Pod "pod-configmaps-fc2ca5e7-36f0-417a-8a60-ae41168505dc": Phase="Pending", Reason="", readiness=false. Elapsed: 143.539459ms
Sep 22 09:02:02.288: INFO: Pod "pod-configmaps-fc2ca5e7-36f0-417a-8a60-ae41168505dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.28766467s
STEP: Saw pod success
Sep 22 09:02:02.288: INFO: Pod "pod-configmaps-fc2ca5e7-36f0-417a-8a60-ae41168505dc" satisfied condition "Succeeded or Failed"
Sep 22 09:02:02.433: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod pod-configmaps-fc2ca5e7-36f0-417a-8a60-ae41168505dc container env-test: <nil>
STEP: delete the pod
Sep 22 09:02:02.731: INFO: Waiting for pod pod-configmaps-fc2ca5e7-36f0-417a-8a60-ae41168505dc to disappear
Sep 22 09:02:02.875: INFO: Pod pod-configmaps-fc2ca5e7-36f0-417a-8a60-ae41168505dc no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:02:02.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8822" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":60,"failed":0}

S
------------------------------
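
The ConfigMap test above injects a single ConfigMap key into a container as an environment variable. A sketch of the pod spec that exercises that API, built with the Go client types; the ConfigMap name, key, and command are placeholders, not the generated names from this run.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env | grep DATA_1"},
                Env: []corev1.EnvVar{{
                    // Map one key of an existing ConfigMap into the container's environment.
                    Name: "DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
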
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:01:59.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-584c882b-6ca2-4194-a53f-3044646cd026
STEP: Creating a pod to test consume secrets
Sep 22 09:02:00.303: INFO: Waiting up to 5m0s for pod "pod-secrets-d4464a49-5b0a-4f86-b61a-d4adb549ae45" in namespace "secrets-357" to be "Succeeded or Failed"
Sep 22 09:02:00.447: INFO: Pod "pod-secrets-d4464a49-5b0a-4f86-b61a-d4adb549ae45": Phase="Pending", Reason="", readiness=false. Elapsed: 143.608788ms
Sep 22 09:02:02.593: INFO: Pod "pod-secrets-d4464a49-5b0a-4f86-b61a-d4adb549ae45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289624537s
Sep 22 09:02:04.737: INFO: Pod "pod-secrets-d4464a49-5b0a-4f86-b61a-d4adb549ae45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4342325s
Sep 22 09:02:06.895: INFO: Pod "pod-secrets-d4464a49-5b0a-4f86-b61a-d4adb549ae45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.59221914s
STEP: Saw pod success
Sep 22 09:02:06.895: INFO: Pod "pod-secrets-d4464a49-5b0a-4f86-b61a-d4adb549ae45" satisfied condition "Succeeded or Failed"
Sep 22 09:02:07.048: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-secrets-d4464a49-5b0a-4f86-b61a-d4adb549ae45 container secret-volume-test: <nil>
STEP: delete the pod
Sep 22 09:02:07.358: INFO: Waiting for pod pod-secrets-d4464a49-5b0a-4f86-b61a-d4adb549ae45 to disappear
Sep 22 09:02:07.502: INFO: Pod pod-secrets-d4464a49-5b0a-4f86-b61a-d4adb549ae45 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.507 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":61,"failed":0}

SS
------------------------------
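
The Secrets test above consumes one Secret through two separate volumes of the same pod. A sketch of that pod wiring with the Go API types; the Secret name, mount paths, and command are placeholders.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    secretName := "secret-test" // placeholder for the generated name in the log
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{
                // The same Secret backs two distinct volumes.
                {Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
                {Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
            },
            Containers: []corev1.Container{{
                Name:    "secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
                    {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
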
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:07.817: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":55,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:08.627: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 168 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:08.786: INFO: Only supported for providers [azure] (not aws)
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:02:09.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2211" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":7,"skipped":77,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 194 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":3,"skipped":46,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:01:56.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:02:09.662: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b184ec2d-9cc4-46e4-89eb-a1295742e4c4" in namespace "downward-api-8414" to be "Succeeded or Failed"
Sep 22 09:02:09.806: INFO: Pod "downwardapi-volume-b184ec2d-9cc4-46e4-89eb-a1295742e4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 144.092397ms
Sep 22 09:02:11.951: INFO: Pod "downwardapi-volume-b184ec2d-9cc4-46e4-89eb-a1295742e4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28896807s
Sep 22 09:02:14.095: INFO: Pod "downwardapi-volume-b184ec2d-9cc4-46e4-89eb-a1295742e4c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433457276s
STEP: Saw pod success
Sep 22 09:02:14.095: INFO: Pod "downwardapi-volume-b184ec2d-9cc4-46e4-89eb-a1295742e4c4" satisfied condition "Succeeded or Failed"
Sep 22 09:02:14.239: INFO: Trying to get logs from node ip-172-20-50-246.sa-east-1.compute.internal pod downwardapi-volume-b184ec2d-9cc4-46e4-89eb-a1295742e4c4 container client-container: <nil>
STEP: delete the pod
Sep 22 09:02:14.540: INFO: Waiting for pod downwardapi-volume-b184ec2d-9cc4-46e4-89eb-a1295742e4c4 to disappear
Sep 22 09:02:14.684: INFO: Pod downwardapi-volume-b184ec2d-9cc4-46e4-89eb-a1295742e4c4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.181 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":31,"failed":0}

SSS
------------------------------
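
The Downward API case above checks that DefaultMode is applied to every file the volume projects. A sketch of a downwardAPI volume carrying an explicit default mode; the mode, file path, and probe command are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // DefaultMode applies to every projected file unless an item overrides it
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        DefaultMode: &mode,
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
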
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:14.998: INFO: Only supported for providers [azure] (not aws)
... skipping 46 lines ...
Sep 22 09:01:40.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 22 09:01:41.089: INFO: created pod
Sep 22 09:01:41.089: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-8178" to be "Succeeded or Failed"
Sep 22 09:01:41.243: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 153.798602ms
Sep 22 09:01:43.391: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301702877s
Sep 22 09:01:45.537: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.44723691s
STEP: Saw pod success
Sep 22 09:01:45.537: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Sep 22 09:02:15.537: INFO: polling logs
Sep 22 09:02:15.682: INFO: Pod logs: 
2021/09/22 09:01:42 OK: Got token
2021/09/22 09:01:42 validating with in-cluster discovery
2021/09/22 09:01:42 OK: got issuer https://api.internal.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io
2021/09/22 09:01:42 Full, not-validated claims: 
... skipping 14 lines ...
• [SLOW TEST:36.064 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:16.130: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
• [SLOW TEST:47.472 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:16.169: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":4,"skipped":17,"failed":0}

S
------------------------------
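
The fsgroupchangepolicy case above drives the pod-level fsGroup and fsGroupChangePolicy fields. A sketch of the relevant security context; the numeric group is arbitrary, and the two policy constants named in the comment are the values the API defines.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    fsGroup := int64(1000)
    // "Always" re-chowns the volume contents on every mount; "OnRootMismatch" does so
    // only when the volume's top-level directory does not already match the fsGroup.
    policy := corev1.FSGroupChangeAlways
    sc := corev1.PodSecurityContext{
        FSGroup:             &fsGroup,
        FSGroupChangePolicy: &policy,
    }
    out, _ := json.MarshalIndent(sc, "", "  ")
    fmt.Println(string(out))
}
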
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
• [SLOW TEST:28.080 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":5,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
Sep 22 09:02:12.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 22 09:02:13.841: INFO: Waiting up to 5m0s for pod "pod-76d43507-84aa-4b0c-a699-9ae4dfde1300" in namespace "emptydir-8048" to be "Succeeded or Failed"
Sep 22 09:02:13.985: INFO: Pod "pod-76d43507-84aa-4b0c-a699-9ae4dfde1300": Phase="Pending", Reason="", readiness=false. Elapsed: 143.476901ms
Sep 22 09:02:16.129: INFO: Pod "pod-76d43507-84aa-4b0c-a699-9ae4dfde1300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287933349s
Sep 22 09:02:18.274: INFO: Pod "pod-76d43507-84aa-4b0c-a699-9ae4dfde1300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432521723s
STEP: Saw pod success
Sep 22 09:02:18.274: INFO: Pod "pod-76d43507-84aa-4b0c-a699-9ae4dfde1300" satisfied condition "Succeeded or Failed"
Sep 22 09:02:18.420: INFO: Trying to get logs from node ip-172-20-50-246.sa-east-1.compute.internal pod pod-76d43507-84aa-4b0c-a699-9ae4dfde1300 container test-container: <nil>
STEP: delete the pod
Sep 22 09:02:18.713: INFO: Waiting for pod pod-76d43507-84aa-4b0c-a699-9ae4dfde1300 to disappear
Sep 22 09:02:18.857: INFO: Pod pod-76d43507-84aa-4b0c-a699-9ae4dfde1300 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.169 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:12.560 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":8,"skipped":82,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:22.202: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:02:17.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bfac609-3d8d-4522-97bd-641dcdded300" in namespace "downward-api-4684" to be "Succeeded or Failed"
Sep 22 09:02:17.169: INFO: Pod "downwardapi-volume-7bfac609-3d8d-4522-97bd-641dcdded300": Phase="Pending", Reason="", readiness=false. Elapsed: 148.112011ms
Sep 22 09:02:19.323: INFO: Pod "downwardapi-volume-7bfac609-3d8d-4522-97bd-641dcdded300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302943505s
Sep 22 09:02:21.468: INFO: Pod "downwardapi-volume-7bfac609-3d8d-4522-97bd-641dcdded300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.447436565s
STEP: Saw pod success
Sep 22 09:02:21.468: INFO: Pod "downwardapi-volume-7bfac609-3d8d-4522-97bd-641dcdded300" satisfied condition "Succeeded or Failed"
Sep 22 09:02:21.611: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod downwardapi-volume-7bfac609-3d8d-4522-97bd-641dcdded300 container client-container: <nil>
STEP: delete the pod
Sep 22 09:02:21.912: INFO: Waiting for pod downwardapi-volume-7bfac609-3d8d-4522-97bd-641dcdded300 to disappear
Sep 22 09:02:22.055: INFO: Pod downwardapi-volume-7bfac609-3d8d-4522-97bd-641dcdded300 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.193 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:22.364: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 33 lines ...
STEP: Destroying namespace "apply-5225" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":9,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:24.690: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 87 lines ...
Sep 22 09:01:39.338: INFO: PersistentVolumeClaim csi-hostpathwbrqs found but phase is Pending instead of Bound.
Sep 22 09:01:41.487: INFO: PersistentVolumeClaim csi-hostpathwbrqs found but phase is Pending instead of Bound.
Sep 22 09:01:43.669: INFO: PersistentVolumeClaim csi-hostpathwbrqs found but phase is Pending instead of Bound.
Sep 22 09:01:45.813: INFO: PersistentVolumeClaim csi-hostpathwbrqs found and phase=Bound (38.822842517s)
STEP: Creating pod pod-subpath-test-dynamicpv-rkt7
STEP: Creating a pod to test subpath
Sep 22 09:01:46.245: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rkt7" in namespace "provisioning-4171" to be "Succeeded or Failed"
Sep 22 09:01:46.389: INFO: Pod "pod-subpath-test-dynamicpv-rkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.407015ms
Sep 22 09:01:48.533: INFO: Pod "pod-subpath-test-dynamicpv-rkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287866246s
Sep 22 09:01:50.677: INFO: Pod "pod-subpath-test-dynamicpv-rkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432069257s
Sep 22 09:01:52.826: INFO: Pod "pod-subpath-test-dynamicpv-rkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58052839s
Sep 22 09:01:54.994: INFO: Pod "pod-subpath-test-dynamicpv-rkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748414218s
Sep 22 09:01:57.142: INFO: Pod "pod-subpath-test-dynamicpv-rkt7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.896456257s
STEP: Saw pod success
Sep 22 09:01:57.142: INFO: Pod "pod-subpath-test-dynamicpv-rkt7" satisfied condition "Succeeded or Failed"
Sep 22 09:01:57.285: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-rkt7 container test-container-volume-dynamicpv-rkt7: <nil>
STEP: delete the pod
Sep 22 09:01:57.677: INFO: Waiting for pod pod-subpath-test-dynamicpv-rkt7 to disappear
Sep 22 09:01:57.821: INFO: Pod pod-subpath-test-dynamicpv-rkt7 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rkt7
Sep 22 09:01:57.821: INFO: Deleting pod "pod-subpath-test-dynamicpv-rkt7" in namespace "provisioning-4171"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":21,"failed":0}

SSS
------------------------------
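
Before the dynamically provisioned subPath pod above is created, the suite waits for the PersistentVolumeClaim to move from Pending to Bound. A minimal client-go sketch of that readiness check; kubeconfig handling and the polling interval are assumptions, while the namespace and claim name are example values from the block above.

package main

import (
    "context"
    "fmt"
    "os"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    ns, claim := "provisioning-4171", "csi-hostpathwbrqs" // example values from the block above

    // Same shape as the log: re-check every couple of seconds for up to 3 minutes.
    err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
        pvc, err := client.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return pvc.Status.Phase == corev1.ClaimBound, nil
    })
    if err != nil {
        panic(err)
    }
    fmt.Println("claim is Bound")
}
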
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:02:22.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-da38b4cf-aa49-42b5-916b-e68c1f831d81
STEP: Creating a pod to test consume configMaps
Sep 22 09:02:23.394: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b35d0923-9c7d-47ce-8888-d3de6f7bc142" in namespace "projected-315" to be "Succeeded or Failed"
Sep 22 09:02:23.537: INFO: Pod "pod-projected-configmaps-b35d0923-9c7d-47ce-8888-d3de6f7bc142": Phase="Pending", Reason="", readiness=false. Elapsed: 143.485419ms
Sep 22 09:02:25.681: INFO: Pod "pod-projected-configmaps-b35d0923-9c7d-47ce-8888-d3de6f7bc142": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287224128s
STEP: Saw pod success
Sep 22 09:02:25.681: INFO: Pod "pod-projected-configmaps-b35d0923-9c7d-47ce-8888-d3de6f7bc142" satisfied condition "Succeeded or Failed"
Sep 22 09:02:25.824: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-projected-configmaps-b35d0923-9c7d-47ce-8888-d3de6f7bc142 container agnhost-container: <nil>
STEP: delete the pod
Sep 22 09:02:26.127: INFO: Waiting for pod pod-projected-configmaps-b35d0923-9c7d-47ce-8888-d3de6f7bc142 to disappear
Sep 22 09:02:26.270: INFO: Pod pod-projected-configmaps-b35d0923-9c7d-47ce-8888-d3de6f7bc142 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:02:26.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-315" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:26.568: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 63 lines ...
• [SLOW TEST:89.200 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:27.540: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 207 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:42.036: INFO: Only supported for providers [azure] (not aws)
... skipping 41 lines ...
Sep 22 09:02:30.307: INFO: PersistentVolumeClaim pvc-xp724 found but phase is Pending instead of Bound.
Sep 22 09:02:32.451: INFO: PersistentVolumeClaim pvc-xp724 found and phase=Bound (8.719993494s)
Sep 22 09:02:32.451: INFO: Waiting up to 3m0s for PersistentVolume local-6zccf to have phase Bound
Sep 22 09:02:32.594: INFO: PersistentVolume local-6zccf found and phase=Bound (143.409314ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zvkb
STEP: Creating a pod to test subpath
Sep 22 09:02:33.025: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zvkb" in namespace "provisioning-8432" to be "Succeeded or Failed"
Sep 22 09:02:33.168: INFO: Pod "pod-subpath-test-preprovisionedpv-zvkb": Phase="Pending", Reason="", readiness=false. Elapsed: 143.274718ms
Sep 22 09:02:35.313: INFO: Pod "pod-subpath-test-preprovisionedpv-zvkb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287587799s
Sep 22 09:02:37.456: INFO: Pod "pod-subpath-test-preprovisionedpv-zvkb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431347469s
Sep 22 09:02:39.604: INFO: Pod "pod-subpath-test-preprovisionedpv-zvkb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579331063s
STEP: Saw pod success
Sep 22 09:02:39.604: INFO: Pod "pod-subpath-test-preprovisionedpv-zvkb" satisfied condition "Succeeded or Failed"
Sep 22 09:02:39.747: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-zvkb container test-container-subpath-preprovisionedpv-zvkb: <nil>
STEP: delete the pod
Sep 22 09:02:40.044: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zvkb to disappear
Sep 22 09:02:40.191: INFO: Pod pod-subpath-test-preprovisionedpv-zvkb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zvkb
Sep 22 09:02:40.191: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zvkb" in namespace "provisioning-8432"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":48,"failed":0}

SS
------------------------------
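
The readOnly-subPath case above mounts only a sub-directory of a pre-provisioned volume and expects writes through that mount to be rejected. A sketch of the volumeMount wiring involved; the claim name, paths, and command are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-readonly-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Bind to an already-bound claim, as in the pre-provisioned PV pattern above.
                    PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-xp724"},
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume",
                    SubPath:   "subdir", // expose only this sub-directory of the volume
                    ReadOnly:  true,     // writes through this mount should fail
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
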
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
Sep 22 09:02:29.459: INFO: PersistentVolumeClaim pvc-zpv9t found but phase is Pending instead of Bound.
Sep 22 09:02:31.605: INFO: PersistentVolumeClaim pvc-zpv9t found and phase=Bound (10.867254552s)
Sep 22 09:02:31.605: INFO: Waiting up to 3m0s for PersistentVolume local-xpftv to have phase Bound
Sep 22 09:02:31.749: INFO: PersistentVolume local-xpftv found and phase=Bound (144.329614ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-q84g
STEP: Creating a pod to test subpath
Sep 22 09:02:32.190: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q84g" in namespace "provisioning-4379" to be "Succeeded or Failed"
Sep 22 09:02:32.334: INFO: Pod "pod-subpath-test-preprovisionedpv-q84g": Phase="Pending", Reason="", readiness=false. Elapsed: 143.733313ms
Sep 22 09:02:34.478: INFO: Pod "pod-subpath-test-preprovisionedpv-q84g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287985673s
Sep 22 09:02:36.622: INFO: Pod "pod-subpath-test-preprovisionedpv-q84g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43236379s
Sep 22 09:02:38.767: INFO: Pod "pod-subpath-test-preprovisionedpv-q84g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577329065s
STEP: Saw pod success
Sep 22 09:02:38.767: INFO: Pod "pod-subpath-test-preprovisionedpv-q84g" satisfied condition "Succeeded or Failed"
Sep 22 09:02:38.911: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-q84g container test-container-subpath-preprovisionedpv-q84g: <nil>
STEP: delete the pod
Sep 22 09:02:39.206: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q84g to disappear
Sep 22 09:02:39.350: INFO: Pod pod-subpath-test-preprovisionedpv-q84g no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-q84g
Sep 22 09:02:39.350: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q84g" in namespace "provisioning-4379"
STEP: Creating pod pod-subpath-test-preprovisionedpv-q84g
STEP: Creating a pod to test subpath
Sep 22 09:02:39.639: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q84g" in namespace "provisioning-4379" to be "Succeeded or Failed"
Sep 22 09:02:39.783: INFO: Pod "pod-subpath-test-preprovisionedpv-q84g": Phase="Pending", Reason="", readiness=false. Elapsed: 143.58846ms
Sep 22 09:02:41.927: INFO: Pod "pod-subpath-test-preprovisionedpv-q84g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287534278s
STEP: Saw pod success
Sep 22 09:02:41.927: INFO: Pod "pod-subpath-test-preprovisionedpv-q84g" satisfied condition "Succeeded or Failed"
Sep 22 09:02:42.075: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-q84g container test-container-subpath-preprovisionedpv-q84g: <nil>
STEP: delete the pod
Sep 22 09:02:42.372: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q84g to disappear
Sep 22 09:02:42.515: INFO: Pod pod-subpath-test-preprovisionedpv-q84g no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-q84g
Sep 22 09:02:42.516: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q84g" in namespace "provisioning-4379"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":20,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:44.498: INFO: Only supported for providers [gce gke] (not aws)
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:02:46.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4118" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:46.397: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 170 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:46.575: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 221 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":6,"skipped":49,"failed":0}

S
------------------------------
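
The delayed-binding topology test above provisions a volume only in a zone the scheduler actually picks, constrained by the StorageClass's AllowedTopologies. A sketch of such a StorageClass; the provisioner and the zone value are assumptions chosen for illustration.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    storagev1 "k8s.io/api/storage/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Delayed binding: the volume is provisioned only once a pod using the claim is scheduled.
    binding := storagev1.VolumeBindingWaitForFirstConsumer
    sc := storagev1.StorageClass{
        ObjectMeta:        metav1.ObjectMeta{Name: "topology-aware-ebs"},
        Provisioner:       "kubernetes.io/aws-ebs", // assumed in-tree EBS provisioner for this sketch
        VolumeBindingMode: &binding,
        AllowedTopologies: []corev1.TopologySelectorTerm{{
            MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
                // The zone label key is an assumption; some clusters still carry the
                // older failure-domain.beta.kubernetes.io/zone label instead.
                Key:    "topology.kubernetes.io/zone",
                Values: []string{"sa-east-1a"},
            }},
        }},
    }
    out, _ := json.MarshalIndent(sc, "", "  ")
    fmt.Println(string(out))
}

With WaitForFirstConsumer, provisioning waits for pod scheduling, which is what lets the AllowedTopologies constraint and the pod's placement agree on a zone.
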
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:48.272: INFO: Only supported for providers [openstack] (not aws)
... skipping 95 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:02:48.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1541" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":5,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:48.449: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 85 lines ...
Sep 22 09:02:05.429: INFO: PersistentVolumeClaim csi-hostpathcr4cn found but phase is Pending instead of Bound.
Sep 22 09:02:07.573: INFO: PersistentVolumeClaim csi-hostpathcr4cn found but phase is Pending instead of Bound.
Sep 22 09:02:09.718: INFO: PersistentVolumeClaim csi-hostpathcr4cn found but phase is Pending instead of Bound.
Sep 22 09:02:11.865: INFO: PersistentVolumeClaim csi-hostpathcr4cn found and phase=Bound (28.025321002s)
STEP: Creating pod pod-subpath-test-dynamicpv-btpl
STEP: Creating a pod to test subpath
Sep 22 09:02:12.312: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-btpl" in namespace "provisioning-7325" to be "Succeeded or Failed"
Sep 22 09:02:12.456: INFO: Pod "pod-subpath-test-dynamicpv-btpl": Phase="Pending", Reason="", readiness=false. Elapsed: 143.61892ms
Sep 22 09:02:14.606: INFO: Pod "pod-subpath-test-dynamicpv-btpl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293810867s
Sep 22 09:02:16.750: INFO: Pod "pod-subpath-test-dynamicpv-btpl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438030834s
Sep 22 09:02:18.895: INFO: Pod "pod-subpath-test-dynamicpv-btpl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582354195s
Sep 22 09:02:21.039: INFO: Pod "pod-subpath-test-dynamicpv-btpl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726190027s
Sep 22 09:02:23.183: INFO: Pod "pod-subpath-test-dynamicpv-btpl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.871056654s
STEP: Saw pod success
Sep 22 09:02:23.184: INFO: Pod "pod-subpath-test-dynamicpv-btpl" satisfied condition "Succeeded or Failed"
Sep 22 09:02:23.335: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-btpl container test-container-subpath-dynamicpv-btpl: <nil>
STEP: delete the pod
Sep 22 09:02:23.637: INFO: Waiting for pod pod-subpath-test-dynamicpv-btpl to disappear
Sep 22 09:02:23.781: INFO: Pod pod-subpath-test-dynamicpv-btpl no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-btpl
Sep 22 09:02:23.781: INFO: Deleting pod "pod-subpath-test-dynamicpv-btpl" in namespace "provisioning-7325"
STEP: Creating pod pod-subpath-test-dynamicpv-btpl
STEP: Creating a pod to test subpath
Sep 22 09:02:24.070: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-btpl" in namespace "provisioning-7325" to be "Succeeded or Failed"
Sep 22 09:02:24.214: INFO: Pod "pod-subpath-test-dynamicpv-btpl": Phase="Pending", Reason="", readiness=false. Elapsed: 143.668352ms
Sep 22 09:02:26.358: INFO: Pod "pod-subpath-test-dynamicpv-btpl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28782075s
Sep 22 09:02:28.503: INFO: Pod "pod-subpath-test-dynamicpv-btpl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432560751s
STEP: Saw pod success
Sep 22 09:02:28.503: INFO: Pod "pod-subpath-test-dynamicpv-btpl" satisfied condition "Succeeded or Failed"
Sep 22 09:02:28.647: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-btpl container test-container-subpath-dynamicpv-btpl: <nil>
STEP: delete the pod
Sep 22 09:02:28.949: INFO: Waiting for pod pod-subpath-test-dynamicpv-btpl to disappear
Sep 22 09:02:29.093: INFO: Pod pod-subpath-test-dynamicpv-btpl no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-btpl
Sep 22 09:02:29.093: INFO: Deleting pod "pod-subpath-test-dynamicpv-btpl" in namespace "provisioning-7325"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":50,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":19,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:02:36.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:54.440: INFO: Only supported for providers [openstack] (not aws)
... skipping 85 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":7,"skipped":45,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:02:54.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-4039" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":5,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:11.317 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":6,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:57.765: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 21 lines ...
Sep 22 09:02:55.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Sep 22 09:02:55.914: INFO: Waiting up to 5m0s for pod "client-containers-6e3acccf-9a20-49b2-be09-2370f3bf587f" in namespace "containers-4301" to be "Succeeded or Failed"
Sep 22 09:02:56.057: INFO: Pod "client-containers-6e3acccf-9a20-49b2-be09-2370f3bf587f": Phase="Pending", Reason="", readiness=false. Elapsed: 143.373576ms
Sep 22 09:02:58.201: INFO: Pod "client-containers-6e3acccf-9a20-49b2-be09-2370f3bf587f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287255511s
STEP: Saw pod success
Sep 22 09:02:58.201: INFO: Pod "client-containers-6e3acccf-9a20-49b2-be09-2370f3bf587f" satisfied condition "Succeeded or Failed"
Sep 22 09:02:58.345: INFO: Trying to get logs from node ip-172-20-50-246.sa-east-1.compute.internal pod client-containers-6e3acccf-9a20-49b2-be09-2370f3bf587f container agnhost-container: <nil>
STEP: delete the pod
Sep 22 09:02:58.638: INFO: Waiting for pod client-containers-6e3acccf-9a20-49b2-be09-2370f3bf587f to disappear
Sep 22 09:02:58.782: INFO: Pod client-containers-6e3acccf-9a20-49b2-be09-2370f3bf587f no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:02:58.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4301" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:02:59.090: INFO: Only supported for providers [vsphere] (not aws)
... skipping 52 lines ...
STEP: creating execpod-noendpoints on node ip-172-20-41-3.sa-east-1.compute.internal
Sep 22 09:02:55.492: INFO: Creating new exec pod
Sep 22 09:02:57.931: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node ip-172-20-41-3.sa-east-1.compute.internal
Sep 22 09:02:57.931: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5609 exec execpod-noendpointsrlpbj -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Sep 22 09:03:00.425: INFO: rc: 1
Sep 22 09:03:00.425: INFO: error contained 'REFUSED', as expected: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5609 exec execpod-noendpointsrlpbj -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:03:00.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5609" for this suite.
[AfterEach] [sig-network] Services
... skipping 3 lines ...
• [SLOW TEST:6.236 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":3,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:00.729: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:02:11.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
Sep 22 09:02:23.303: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6css4] to have phase Bound
Sep 22 09:02:23.448: INFO: PersistentVolumeClaim pvc-6css4 found and phase=Bound (144.782145ms)
STEP: Deleting the previously created pod
Sep 22 09:02:38.177: INFO: Deleting pod "pvc-volume-tester-9mmfw" in namespace "csi-mock-volumes-9722"
Sep 22 09:02:38.322: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9mmfw" to be fully deleted
STEP: Checking CSI driver logs
Sep 22 09:02:40.760: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/876e10a4-e2ae-4ded-9af0-24df86454989/volumes/kubernetes.io~csi/pvc-8b77ccd1-99e0-4271-bab5-4beb8765a8c5/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-9mmfw
Sep 22 09:02:40.760: INFO: Deleting pod "pvc-volume-tester-9mmfw" in namespace "csi-mock-volumes-9722"
STEP: Deleting claim pvc-6css4
Sep 22 09:02:41.201: INFO: Waiting up to 2m0s for PersistentVolume pvc-8b77ccd1-99e0-4271-bab5-4beb8765a8c5 to get deleted
Sep 22 09:02:41.345: INFO: PersistentVolume pvc-8b77ccd1-99e0-4271-bab5-4beb8765a8c5 found and phase=Released (143.487644ms)
Sep 22 09:02:43.489: INFO: PersistentVolume pvc-8b77ccd1-99e0-4271-bab5-4beb8765a8c5 was removed
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":3,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:01.057: INFO: Only supported for providers [vsphere] (not aws)
... skipping 70 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-b3331e29-454f-44ab-a755-645a720f28ba
STEP: Creating a pod to test consume configMaps
Sep 22 09:02:49.468: INFO: Waiting up to 5m0s for pod "pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a" in namespace "configmap-1533" to be "Succeeded or Failed"
Sep 22 09:02:49.612: INFO: Pod "pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 143.662179ms
Sep 22 09:02:51.755: INFO: Pod "pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287511952s
Sep 22 09:02:53.900: INFO: Pod "pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432476313s
Sep 22 09:02:56.045: INFO: Pod "pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577412532s
Sep 22 09:02:58.189: INFO: Pod "pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72135431s
Sep 22 09:03:00.334: INFO: Pod "pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.866112165s
STEP: Saw pod success
Sep 22 09:03:00.334: INFO: Pod "pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a" satisfied condition "Succeeded or Failed"
Sep 22 09:03:00.478: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a container agnhost-container: <nil>
STEP: delete the pod
Sep 22 09:03:00.769: INFO: Waiting for pod pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a to disappear
Sep 22 09:03:00.913: INFO: Pod pod-configmaps-50cb3451-5191-490c-9e43-82a7a9c11b5a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.747 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:01.222: INFO: Only supported for providers [openstack] (not aws)
... skipping 14 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":8,"skipped":59,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:02:44.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
• [SLOW TEST:19.084 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":9,"skipped":59,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 22 09:02:58.641: INFO: Waiting up to 5m0s for pod "pod-31a2b18a-792e-4c34-ae50-0ee854661639" in namespace "emptydir-4423" to be "Succeeded or Failed"
Sep 22 09:02:58.785: INFO: Pod "pod-31a2b18a-792e-4c34-ae50-0ee854661639": Phase="Pending", Reason="", readiness=false. Elapsed: 143.197829ms
Sep 22 09:03:00.928: INFO: Pod "pod-31a2b18a-792e-4c34-ae50-0ee854661639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286860589s
Sep 22 09:03:03.072: INFO: Pod "pod-31a2b18a-792e-4c34-ae50-0ee854661639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430645575s
STEP: Saw pod success
Sep 22 09:03:03.072: INFO: Pod "pod-31a2b18a-792e-4c34-ae50-0ee854661639" satisfied condition "Succeeded or Failed"
Sep 22 09:03:03.216: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-31a2b18a-792e-4c34-ae50-0ee854661639 container test-container: <nil>
STEP: delete the pod
Sep 22 09:03:03.554: INFO: Waiting for pod pod-31a2b18a-792e-4c34-ae50-0ee854661639 to disappear
Sep 22 09:03:03.697: INFO: Pod pod-31a2b18a-792e-4c34-ae50-0ee854661639 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":7,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:04.001: INFO: Driver emptydir doesn't support ext3 -- skipping
... skipping 162 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:470
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":7,"skipped":63,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:06.039: INFO: Only supported for providers [azure] (not aws)
... skipping 167 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, have capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":2,"skipped":20,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Sep 22 09:03:02.098: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3264" to be "Succeeded or Failed"
Sep 22 09:03:02.242: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 143.612769ms
Sep 22 09:03:04.387: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288486032s
Sep 22 09:03:06.531: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432512564s
STEP: Saw pod success
Sep 22 09:03:06.531: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 22 09:03:06.675: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Sep 22 09:03:06.969: INFO: Waiting for pod pod-host-path-test to disappear
Sep 22 09:03:07.112: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 61 lines ...
Sep 22 09:02:58.755: INFO: PersistentVolumeClaim pvc-w4cns found but phase is Pending instead of Bound.
Sep 22 09:03:00.898: INFO: PersistentVolumeClaim pvc-w4cns found and phase=Bound (8.719022747s)
Sep 22 09:03:00.898: INFO: Waiting up to 3m0s for PersistentVolume local-p2887 to have phase Bound
Sep 22 09:03:01.041: INFO: PersistentVolume local-p2887 found and phase=Bound (143.045989ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-rgjc
STEP: Creating a pod to test exec-volume-test
Sep 22 09:03:01.482: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-rgjc" in namespace "volume-1135" to be "Succeeded or Failed"
Sep 22 09:03:01.625: INFO: Pod "exec-volume-test-preprovisionedpv-rgjc": Phase="Pending", Reason="", readiness=false. Elapsed: 143.378868ms
Sep 22 09:03:03.769: INFO: Pod "exec-volume-test-preprovisionedpv-rgjc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287118625s
Sep 22 09:03:05.912: INFO: Pod "exec-volume-test-preprovisionedpv-rgjc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430412421s
STEP: Saw pod success
Sep 22 09:03:05.912: INFO: Pod "exec-volume-test-preprovisionedpv-rgjc" satisfied condition "Succeeded or Failed"
Sep 22 09:03:06.056: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-rgjc container exec-container-preprovisionedpv-rgjc: <nil>
STEP: delete the pod
Sep 22 09:03:06.347: INFO: Waiting for pod exec-volume-test-preprovisionedpv-rgjc to disappear
Sep 22 09:03:06.491: INFO: Pod exec-volume-test-preprovisionedpv-rgjc no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-rgjc
Sep 22 09:03:06.491: INFO: Deleting pod "exec-volume-test-preprovisionedpv-rgjc" in namespace "volume-1135"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:10.259: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 42 lines ...
Sep 22 09:02:59.458: INFO: PersistentVolumeClaim pvc-5xw5g found but phase is Pending instead of Bound.
Sep 22 09:03:01.603: INFO: PersistentVolumeClaim pvc-5xw5g found and phase=Bound (10.863067717s)
Sep 22 09:03:01.603: INFO: Waiting up to 3m0s for PersistentVolume local-shx4b to have phase Bound
Sep 22 09:03:01.746: INFO: PersistentVolume local-shx4b found and phase=Bound (143.074293ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dpwp
STEP: Creating a pod to test subpath
Sep 22 09:03:02.180: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dpwp" in namespace "provisioning-9996" to be "Succeeded or Failed"
Sep 22 09:03:02.327: INFO: Pod "pod-subpath-test-preprovisionedpv-dpwp": Phase="Pending", Reason="", readiness=false. Elapsed: 146.868944ms
Sep 22 09:03:04.472: INFO: Pod "pod-subpath-test-preprovisionedpv-dpwp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291793235s
Sep 22 09:03:06.616: INFO: Pod "pod-subpath-test-preprovisionedpv-dpwp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436463073s
Sep 22 09:03:08.762: INFO: Pod "pod-subpath-test-preprovisionedpv-dpwp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.581479543s
STEP: Saw pod success
Sep 22 09:03:08.762: INFO: Pod "pod-subpath-test-preprovisionedpv-dpwp" satisfied condition "Succeeded or Failed"
Sep 22 09:03:08.905: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-dpwp container test-container-subpath-preprovisionedpv-dpwp: <nil>
STEP: delete the pod
Sep 22 09:03:09.198: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dpwp to disappear
Sep 22 09:03:09.341: INFO: Pod pod-subpath-test-preprovisionedpv-dpwp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dpwp
Sep 22 09:03:09.341: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dpwp" in namespace "provisioning-9996"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:11.319: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 126 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:03:12.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7341" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":3,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:12.477: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:00:59.648: INFO: >>> kubeConfig: /root/.kube/config
... skipping 174 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Sep 22 09:03:12.196: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-9322" to be "Succeeded or Failed"
Sep 22 09:03:12.340: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 143.991412ms
Sep 22 09:03:14.484: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288054168s
Sep 22 09:03:16.628: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.431640519s
Sep 22 09:03:16.628: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:03:16.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9322" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":8,"skipped":53,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:17.103: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 68 lines ...
Sep 22 09:00:49.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-86bd74f566\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 22 09:00:51.728: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-86bd74f566\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 22 09:00:53.728: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898052, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767898049, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-86bd74f566\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 22 09:00:56.017: INFO: Waiting up to 2m0s to get response from 100.65.10.149:8080
Sep 22 09:00:56.017: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip'
Sep 22 09:01:27.531: INFO: rc: 28
Sep 22 09:01:27.531: INFO: got err: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Sep 22 09:01:29.532: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip'
Sep 22 09:02:01.058: INFO: rc: 28
Sep 22 09:02:01.058: INFO: got err: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Sep 22 09:02:03.059: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip'
Sep 22 09:02:34.532: INFO: rc: 28
Sep 22 09:02:34.533: INFO: got err: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Sep 22 09:02:36.533: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip'
Sep 22 09:03:08.017: INFO: rc: 28
Sep 22 09:03:08.018: INFO: got err: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Sep 22 09:03:10.019: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
        },
        Code: 28,
    }
    error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip:
    Command stdout:
    
    stderr:
    + curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip
    command terminated with exit code 28
    
    error:
    exit status 28
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execSourceIPTest(0x0, 0x0, 0x0, 0x0, 0xc00388ad80, 0x1a, 0xc0032d2e10, 0x15, 0xc0033b7a00, 0xd, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133 +0x4d9
... skipping 251 lines ...
• Failure [157.209 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903

  Sep 22 09:03:10.019: Unexpected error:
      <exec.CodeExitError>: {
          Err: {
              s: "error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
          },
          Code: 28,
      }
      error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5271 exec pause-pod-86bd74f566-6zz88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip:
      Command stdout:
      
      stderr:
      + curl -q -s --connect-timeout 30 100.65.10.149:8080/clientip
      command terminated with exit code 28
      
      error:
      exit status 28
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133
------------------------------
{"msg":"FAILED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":1,"skipped":21,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:13.037 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":8,"skipped":78,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:03:19.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9607" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":2,"skipped":27,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":8,"skipped":59,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
• [SLOW TEST:6.830 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":28,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:26.928: INFO: Only supported for providers [azure] (not aws)
... skipping 81 lines ...
      Driver local doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:01:10.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 35 lines ...
Sep 22 09:01:15.841: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3670
Sep 22 09:01:16.017: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3670
Sep 22 09:01:16.162: INFO: creating *v1.StatefulSet: csi-mock-volumes-3670-5386/csi-mockplugin
Sep 22 09:01:16.307: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3670
Sep 22 09:01:16.452: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3670"
Sep 22 09:01:16.595: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3670 to register on node ip-172-20-50-246.sa-east-1.compute.internal
I0922 09:01:42.950219    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3670","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0922 09:01:43.737561    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0922 09:01:43.887647    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3670","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0922 09:01:44.032773    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0922 09:01:44.176982    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0922 09:01:44.467678    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3670"},"Error":"","FullError":null}
STEP: Creating pod
Sep 22 09:01:59.475: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0922 09:01:59.802148    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0922 09:02:00.954092    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62"}}},"Error":"","FullError":null}
I0922 09:02:01.997074    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Sep 22 09:02:02.148: INFO: >>> kubeConfig: /root/.kube/config
I0922 09:02:03.155643    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62","storage.kubernetes.io/csiProvisionerIdentity":"1632301304250-8081-csi-mock-csi-mock-volumes-3670"}},"Response":{},"Error":"","FullError":null}
I0922 09:02:03.826959    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Sep 22 09:02:03.974: INFO: >>> kubeConfig: /root/.kube/config
Sep 22 09:02:04.916: INFO: >>> kubeConfig: /root/.kube/config
Sep 22 09:02:05.943: INFO: >>> kubeConfig: /root/.kube/config
I0922 09:02:06.923460    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62/globalmount","target_path":"/var/lib/kubelet/pods/90174d2d-4ae8-4194-a587-fb2abf98b36e/volumes/kubernetes.io~csi/pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62","storage.kubernetes.io/csiProvisionerIdentity":"1632301304250-8081-csi-mock-csi-mock-volumes-3670"}},"Response":{},"Error":"","FullError":null}
Sep 22 09:02:10.057: INFO: Deleting pod "pvc-volume-tester-qhshc" in namespace "csi-mock-volumes-3670"
Sep 22 09:02:10.203: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qhshc" to be fully deleted
Sep 22 09:02:11.712: INFO: >>> kubeConfig: /root/.kube/config
I0922 09:02:12.660978    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/90174d2d-4ae8-4194-a587-fb2abf98b36e/volumes/kubernetes.io~csi/pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62/mount"},"Response":{},"Error":"","FullError":null}
I0922 09:02:12.817219    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0922 09:02:12.963868    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62/globalmount"},"Response":{},"Error":"","FullError":null}
I0922 09:02:22.662296    5261 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Sep 22 09:02:23.644: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-xztlf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3670", SelfLink:"", UID:"144b0dd4-7070-4aca-9d7f-50d8835a5b62", ResourceVersion:"4991", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767898119, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002561080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002561098)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0026ac5d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0026ac5e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 22 09:02:23.644: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-xztlf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3670", SelfLink:"", UID:"144b0dd4-7070-4aca-9d7f-50d8835a5b62", ResourceVersion:"4995", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767898119, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-50-246.sa-east-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c6e000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c6e018)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c6e030), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c6e048)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001e93d80), VolumeMode:(*v1.PersistentVolumeMode)(0xc001e93d90), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 22 09:02:23.645: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-xztlf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3670", SelfLink:"", UID:"144b0dd4-7070-4aca-9d7f-50d8835a5b62", ResourceVersion:"4996", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767898119, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3670", "volume.kubernetes.io/selected-node":"ip-172-20-50-246.sa-east-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032f3b90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032f3ba8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032f3bc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032f3bd8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032f3bf0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032f3c08)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003325400), VolumeMode:(*v1.PersistentVolumeMode)(0xc003325410), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 22 09:02:23.645: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-xztlf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3670", SelfLink:"", UID:"144b0dd4-7070-4aca-9d7f-50d8835a5b62", ResourceVersion:"5061", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767898119, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3670", "volume.kubernetes.io/selected-node":"ip-172-20-50-246.sa-east-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032f3c38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032f3c50)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032f3c68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032f3c80)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032f3c98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032f3cb0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62", StorageClassName:(*string)(0xc003325440), VolumeMode:(*v1.PersistentVolumeMode)(0xc003325450), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 22 09:02:23.645: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-xztlf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3670", SelfLink:"", UID:"144b0dd4-7070-4aca-9d7f-50d8835a5b62", ResourceVersion:"5062", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767898119, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3670", "volume.kubernetes.io/selected-node":"ip-172-20-50-246.sa-east-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032f3ce0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032f3cf8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032f3d10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032f3d28)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032f3d40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032f3d58)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-144b0dd4-7070-4aca-9d7f-50d8835a5b62", StorageClassName:(*string)(0xc003325480), VolumeMode:(*v1.PersistentVolumeMode)(0xc003325490), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, late binding, no topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":4,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:03:27.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1674" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":4,"skipped":42,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:28.075: INFO: Only supported for providers [azure] (not aws)
... skipping 42 lines ...
• [SLOW TEST:6.096 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":67,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":6,"skipped":35,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:03:16.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
STEP: Creating a job
STEP: Ensuring job exceeds backoffLimit
STEP: Checking that 2 pods were created and their status is Failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:03:31.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8563" for this suite.


• [SLOW TEST:15.513 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":7,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:31.612: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 216 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":61,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
STEP: Destroying namespace "services-815" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":8,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:31.279 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:10.231 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":8,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:41.963: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 59 lines ...
• [SLOW TEST:92.340 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":71,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 78 lines ...
Sep 22 09:02:12.939: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-1408qhhfw      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1408    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1408qhhfw,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1408    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1408qhhfw,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1408    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1408qhhfw,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-1408qhhfw    f35c26d8-4d3d-4bb1-b9de-06adde13bb53 5592 0 2021-09-22 09:02:13 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-09-22 09:02:13 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-86bkv pvc- provisioning-1408  e54bfd71-eb12-4077-8648-9eb7513b76e2 5602 0 2021-09-22 09:02:13 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-09-22 09:02:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1408qhhfw,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-4327935c-9b36-41a7-b1f2-f5a73b6ead2d in namespace provisioning-1408
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Sep 22 09:02:28.537: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-2nk74" in namespace "provisioning-1408" to be "Succeeded or Failed"
Sep 22 09:02:28.680: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Pending", Reason="", readiness=false. Elapsed: 143.000476ms
Sep 22 09:02:30.824: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287204497s
Sep 22 09:02:32.971: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434161141s
Sep 22 09:02:35.114: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577539695s
Sep 22 09:02:37.264: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Pending", Reason="", readiness=false. Elapsed: 8.727685057s
Sep 22 09:02:39.408: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Pending", Reason="", readiness=false. Elapsed: 10.871853972s
... skipping 2 lines ...
Sep 22 09:02:45.842: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Pending", Reason="", readiness=false. Elapsed: 17.305016045s
Sep 22 09:02:47.988: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Pending", Reason="", readiness=false. Elapsed: 19.451051943s
Sep 22 09:02:50.132: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Pending", Reason="", readiness=false. Elapsed: 21.595810646s
Sep 22 09:02:52.276: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Pending", Reason="", readiness=false. Elapsed: 23.739664493s
Sep 22 09:02:54.420: INFO: Pod "pvc-volume-tester-writer-2nk74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.883502116s
STEP: Saw pod success
Sep 22 09:02:54.420: INFO: Pod "pvc-volume-tester-writer-2nk74" satisfied condition "Succeeded or Failed"
Sep 22 09:02:54.709: INFO: Pod pvc-volume-tester-writer-2nk74 has the following logs: 
Sep 22 09:02:54.709: INFO: Deleting pod "pvc-volume-tester-writer-2nk74" in namespace "provisioning-1408"
Sep 22 09:02:54.858: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-2nk74" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-38-78.sa-east-1.compute.internal"
Sep 22 09:02:55.437: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-97qgd" in namespace "provisioning-1408" to be "Succeeded or Failed"
Sep 22 09:02:55.580: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 143.295356ms
Sep 22 09:02:57.724: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28694091s
Sep 22 09:02:59.869: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431991371s
Sep 22 09:03:02.013: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576302984s
Sep 22 09:03:04.158: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72124556s
Sep 22 09:03:06.305: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.868046403s
Sep 22 09:03:08.448: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.011743873s
Sep 22 09:03:10.592: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.155194918s
Sep 22 09:03:12.735: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.298746903s
Sep 22 09:03:14.879: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 19.442436921s
Sep 22 09:03:17.023: INFO: Pod "pvc-volume-tester-reader-97qgd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.58642541s
STEP: Saw pod success
Sep 22 09:03:17.023: INFO: Pod "pvc-volume-tester-reader-97qgd" satisfied condition "Succeeded or Failed"
Sep 22 09:03:17.312: INFO: Pod pvc-volume-tester-reader-97qgd has the following logs: hello world

Sep 22 09:03:17.312: INFO: Deleting pod "pvc-volume-tester-reader-97qgd" in namespace "provisioning-1408"
Sep 22 09:03:17.463: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-97qgd" to be fully deleted
Sep 22 09:03:17.613: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-86bkv] to have phase Bound
Sep 22 09:03:17.756: INFO: PersistentVolumeClaim pvc-86bkv found and phase=Bound (143.268755ms)
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":10,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:49.789: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 124 lines ...
• [SLOW TEST:15.934 seconds]
[sig-node] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":88,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":70,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:51.407: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":4,"skipped":34,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:03:24.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:26.593 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:13.323 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:03:54.937: INFO: Only supported for providers [vsphere] (not aws)
... skipping 90 lines ...
Sep 22 09:03:51.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 22 09:03:52.282: INFO: Waiting up to 5m0s for pod "pod-4a936c8b-fc3b-4ac3-a7b1-cded7be06e37" in namespace "emptydir-6468" to be "Succeeded or Failed"
Sep 22 09:03:52.426: INFO: Pod "pod-4a936c8b-fc3b-4ac3-a7b1-cded7be06e37": Phase="Pending", Reason="", readiness=false. Elapsed: 143.747235ms
Sep 22 09:03:54.570: INFO: Pod "pod-4a936c8b-fc3b-4ac3-a7b1-cded7be06e37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288333837s
STEP: Saw pod success
Sep 22 09:03:54.570: INFO: Pod "pod-4a936c8b-fc3b-4ac3-a7b1-cded7be06e37" satisfied condition "Succeeded or Failed"
Sep 22 09:03:54.714: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-4a936c8b-fc3b-4ac3-a7b1-cded7be06e37 container test-container: <nil>
STEP: delete the pod
Sep 22 09:03:55.014: INFO: Waiting for pod pod-4a936c8b-fc3b-4ac3-a7b1-cded7be06e37 to disappear
Sep 22 09:03:55.158: INFO: Pod pod-4a936c8b-fc3b-4ac3-a7b1-cded7be06e37 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 28 lines ...
Sep 22 09:03:43.659: INFO: PersistentVolumeClaim pvc-chbkx found but phase is Pending instead of Bound.
Sep 22 09:03:45.803: INFO: PersistentVolumeClaim pvc-chbkx found and phase=Bound (13.019767651s)
Sep 22 09:03:45.803: INFO: Waiting up to 3m0s for PersistentVolume local-6ll9q to have phase Bound
Sep 22 09:03:45.947: INFO: PersistentVolume local-6ll9q found and phase=Bound (143.594657ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2dnh
STEP: Creating a pod to test subpath
Sep 22 09:03:46.381: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2dnh" in namespace "provisioning-2418" to be "Succeeded or Failed"
Sep 22 09:03:46.524: INFO: Pod "pod-subpath-test-preprovisionedpv-2dnh": Phase="Pending", Reason="", readiness=false. Elapsed: 143.722817ms
Sep 22 09:03:48.669: INFO: Pod "pod-subpath-test-preprovisionedpv-2dnh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288761748s
Sep 22 09:03:50.815: INFO: Pod "pod-subpath-test-preprovisionedpv-2dnh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433906247s
STEP: Saw pod success
Sep 22 09:03:50.815: INFO: Pod "pod-subpath-test-preprovisionedpv-2dnh" satisfied condition "Succeeded or Failed"
Sep 22 09:03:50.958: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-2dnh container test-container-subpath-preprovisionedpv-2dnh: <nil>
STEP: delete the pod
Sep 22 09:03:51.253: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2dnh to disappear
Sep 22 09:03:51.396: INFO: Pod pod-subpath-test-preprovisionedpv-2dnh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2dnh
Sep 22 09:03:51.396: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2dnh" in namespace "provisioning-2418"
STEP: Creating pod pod-subpath-test-preprovisionedpv-2dnh
STEP: Creating a pod to test subpath
Sep 22 09:03:51.685: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2dnh" in namespace "provisioning-2418" to be "Succeeded or Failed"
Sep 22 09:03:51.829: INFO: Pod "pod-subpath-test-preprovisionedpv-2dnh": Phase="Pending", Reason="", readiness=false. Elapsed: 144.045393ms
Sep 22 09:03:53.973: INFO: Pod "pod-subpath-test-preprovisionedpv-2dnh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288212118s
STEP: Saw pod success
Sep 22 09:03:53.973: INFO: Pod "pod-subpath-test-preprovisionedpv-2dnh" satisfied condition "Succeeded or Failed"
Sep 22 09:03:54.117: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-2dnh container test-container-subpath-preprovisionedpv-2dnh: <nil>
STEP: delete the pod
Sep 22 09:03:54.410: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2dnh to disappear
Sep 22 09:03:54.553: INFO: Pod pod-subpath-test-preprovisionedpv-2dnh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2dnh
Sep 22 09:03:54.554: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2dnh" in namespace "provisioning-2418"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":55,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:03:51.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-e23157af-abfd-40de-9240-25f9632e73d6
STEP: Creating a pod to test consume configMaps
Sep 22 09:03:52.609: INFO: Waiting up to 5m0s for pod "pod-configmaps-538a00ca-10c6-4d82-96f8-5347c8b4152e" in namespace "configmap-8443" to be "Succeeded or Failed"
Sep 22 09:03:52.754: INFO: Pod "pod-configmaps-538a00ca-10c6-4d82-96f8-5347c8b4152e": Phase="Pending", Reason="", readiness=false. Elapsed: 145.277038ms
Sep 22 09:03:54.899: INFO: Pod "pod-configmaps-538a00ca-10c6-4d82-96f8-5347c8b4152e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290082231s
Sep 22 09:03:57.044: INFO: Pod "pod-configmaps-538a00ca-10c6-4d82-96f8-5347c8b4152e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434828117s
STEP: Saw pod success
Sep 22 09:03:57.044: INFO: Pod "pod-configmaps-538a00ca-10c6-4d82-96f8-5347c8b4152e" satisfied condition "Succeeded or Failed"
Sep 22 09:03:57.187: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-configmaps-538a00ca-10c6-4d82-96f8-5347c8b4152e container configmap-volume-test: <nil>
STEP: delete the pod
Sep 22 09:03:57.485: INFO: Waiting for pod pod-configmaps-538a00ca-10c6-4d82-96f8-5347c8b4152e to disappear
Sep 22 09:03:57.629: INFO: Pod pod-configmaps-538a00ca-10c6-4d82-96f8-5347c8b4152e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.323 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:03:58.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1888" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":5,"skipped":43,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:03:44.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 78 lines ...
I0922 09:00:54.000006    5316 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0922 09:00:57.000986    5316 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Sep 22 09:00:57.447: INFO: Creating new exec pod
Sep 22 09:00:59.880: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8047 exec execpodcmk8m -- /bin/sh -x -c nslookup nodeport-service.services-8047.svc.cluster.local'
Sep 22 09:01:16.378: INFO: rc: 1
Sep 22 09:01:16.378: INFO: ExternalName service "services-8047/execpodcmk8m" failed to resolve to IP
Sep 22 09:01:18.379: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8047 exec execpodcmk8m -- /bin/sh -x -c nslookup nodeport-service.services-8047.svc.cluster.local'
Sep 22 09:01:34.897: INFO: rc: 1
Sep 22 09:01:34.897: INFO: ExternalName service "services-8047/execpodcmk8m" failed to resolve to IP
Sep 22 09:01:36.379: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8047 exec execpodcmk8m -- /bin/sh -x -c nslookup nodeport-service.services-8047.svc.cluster.local'
Sep 22 09:01:53.065: INFO: rc: 1
Sep 22 09:01:53.065: INFO: ExternalName service "services-8047/execpodcmk8m" failed to resolve to IP
Sep 22 09:01:54.378: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8047 exec execpodcmk8m -- /bin/sh -x -c nslookup nodeport-service.services-8047.svc.cluster.local'
Sep 22 09:02:10.848: INFO: rc: 1
Sep 22 09:02:10.848: INFO: ExternalName service "services-8047/execpodcmk8m" failed to resolve to IP
Sep 22 09:02:12.379: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8047 exec execpodcmk8m -- /bin/sh -x -c nslookup nodeport-service.services-8047.svc.cluster.local'
Sep 22 09:02:28.843: INFO: rc: 1
Sep 22 09:02:28.843: INFO: ExternalName service "services-8047/execpodcmk8m" failed to resolve to IP
Sep 22 09:02:30.379: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8047 exec execpodcmk8m -- /bin/sh -x -c nslookup nodeport-service.services-8047.svc.cluster.local'
Sep 22 09:02:46.900: INFO: rc: 1
Sep 22 09:02:46.900: INFO: ExternalName service "services-8047/execpodcmk8m" failed to resolve to IP
Sep 22 09:02:48.379: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8047 exec execpodcmk8m -- /bin/sh -x -c nslookup nodeport-service.services-8047.svc.cluster.local'
Sep 22 09:03:04.895: INFO: rc: 1
Sep 22 09:03:04.895: INFO: ExternalName service "services-8047/execpodcmk8m" failed to resolve to IP
Sep 22 09:03:06.379: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8047 exec execpodcmk8m -- /bin/sh -x -c nslookup nodeport-service.services-8047.svc.cluster.local'
Sep 22 09:03:22.910: INFO: rc: 1
Sep 22 09:03:22.910: INFO: ExternalName service "services-8047/execpodcmk8m" failed to resolve to IP
Sep 22 09:03:22.910: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8047 exec execpodcmk8m -- /bin/sh -x -c nslookup nodeport-service.services-8047.svc.cluster.local'
Sep 22 09:03:39.392: INFO: rc: 1
Sep 22 09:03:39.392: INFO: ExternalName service "services-8047/execpodcmk8m" failed to resolve to IP
Sep 22 09:03:39.392: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002be240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 254 lines ...
• Failure [212.401 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 22 09:03:39.392: Unexpected error:
      <*errors.errorString | 0xc0002be240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1437
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":1,"skipped":24,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:04.032: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
Sep 22 09:03:15.672: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-f2pn8] to have phase Bound
Sep 22 09:03:15.815: INFO: PersistentVolumeClaim pvc-f2pn8 found and phase=Bound (143.415243ms)
STEP: Deleting the previously created pod
Sep 22 09:03:34.537: INFO: Deleting pod "pvc-volume-tester-bnrqn" in namespace "csi-mock-volumes-595"
Sep 22 09:03:34.683: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bnrqn" to be fully deleted
STEP: Checking CSI driver logs
Sep 22 09:03:39.120: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7cbf9c93-b710-4adf-9628-b4bf8e040d8b/volumes/kubernetes.io~csi/pvc-65bac96c-decb-487a-8ca8-f39bd47c7173/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-bnrqn
Sep 22 09:03:39.121: INFO: Deleting pod "pvc-volume-tester-bnrqn" in namespace "csi-mock-volumes-595"
STEP: Deleting claim pvc-f2pn8
Sep 22 09:03:39.557: INFO: Waiting up to 2m0s for PersistentVolume pvc-65bac96c-decb-487a-8ca8-f39bd47c7173 to get deleted
Sep 22 09:03:39.701: INFO: PersistentVolume pvc-65bac96c-decb-487a-8ca8-f39bd47c7173 found and phase=Released (143.337095ms)
Sep 22 09:03:41.854: INFO: PersistentVolume pvc-65bac96c-decb-487a-8ca8-f39bd47c7173 found and phase=Released (2.296993878s)
... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":4,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:11.545: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 199 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":6,"skipped":26,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":10,"skipped":62,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:03:59.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":14,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:18.430: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 190 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:18.606: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-fa5e8af5-3359-4ad3-b872-58aa67a92f70
STEP: Creating a pod to test consume secrets
Sep 22 09:04:19.528: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e43c264b-1a1f-4320-a0f2-c7d16dbd87af" in namespace "projected-6260" to be "Succeeded or Failed"
Sep 22 09:04:19.672: INFO: Pod "pod-projected-secrets-e43c264b-1a1f-4320-a0f2-c7d16dbd87af": Phase="Pending", Reason="", readiness=false. Elapsed: 143.656055ms
Sep 22 09:04:21.817: INFO: Pod "pod-projected-secrets-e43c264b-1a1f-4320-a0f2-c7d16dbd87af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28865502s
Sep 22 09:04:23.962: INFO: Pod "pod-projected-secrets-e43c264b-1a1f-4320-a0f2-c7d16dbd87af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433953583s
STEP: Saw pod success
Sep 22 09:04:23.962: INFO: Pod "pod-projected-secrets-e43c264b-1a1f-4320-a0f2-c7d16dbd87af" satisfied condition "Succeeded or Failed"
Sep 22 09:04:24.107: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-projected-secrets-e43c264b-1a1f-4320-a0f2-c7d16dbd87af container secret-volume-test: <nil>
STEP: delete the pod
Sep 22 09:04:24.406: INFO: Waiting for pod pod-projected-secrets-e43c264b-1a1f-4320-a0f2-c7d16dbd87af to disappear
Sep 22 09:04:24.550: INFO: Pod pod-projected-secrets-e43c264b-1a1f-4320-a0f2-c7d16dbd87af no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.335 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:04:18.132: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4759ff29-9f89-4009-849c-a3ff61b560d0" in namespace "downward-api-9516" to be "Succeeded or Failed"
Sep 22 09:04:18.275: INFO: Pod "downwardapi-volume-4759ff29-9f89-4009-849c-a3ff61b560d0": Phase="Pending", Reason="", readiness=false. Elapsed: 143.374621ms
Sep 22 09:04:20.419: INFO: Pod "downwardapi-volume-4759ff29-9f89-4009-849c-a3ff61b560d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287601786s
Sep 22 09:04:22.564: INFO: Pod "downwardapi-volume-4759ff29-9f89-4009-849c-a3ff61b560d0": Phase="Running", Reason="", readiness=true. Elapsed: 4.431887311s
Sep 22 09:04:24.709: INFO: Pod "downwardapi-volume-4759ff29-9f89-4009-849c-a3ff61b560d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577358484s
STEP: Saw pod success
Sep 22 09:04:24.709: INFO: Pod "downwardapi-volume-4759ff29-9f89-4009-849c-a3ff61b560d0" satisfied condition "Succeeded or Failed"
Sep 22 09:04:24.853: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod downwardapi-volume-4759ff29-9f89-4009-849c-a3ff61b560d0 container client-container: <nil>
STEP: delete the pod
Sep 22 09:04:25.145: INFO: Waiting for pod downwardapi-volume-4759ff29-9f89-4009-849c-a3ff61b560d0 to disappear
Sep 22 09:04:25.288: INFO: Pod downwardapi-volume-4759ff29-9f89-4009-849c-a3ff61b560d0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.354 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":65,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:25.639: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:04:28.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apf-683" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":12,"skipped":68,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:04:24.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 22 09:04:25.734: INFO: Waiting up to 5m0s for pod "pod-0a0a7e88-5744-432e-908a-9af588af5093" in namespace "emptydir-9" to be "Succeeded or Failed"
Sep 22 09:04:25.878: INFO: Pod "pod-0a0a7e88-5744-432e-908a-9af588af5093": Phase="Pending", Reason="", readiness=false. Elapsed: 143.842569ms
Sep 22 09:04:28.022: INFO: Pod "pod-0a0a7e88-5744-432e-908a-9af588af5093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288639079s
Sep 22 09:04:30.168: INFO: Pod "pod-0a0a7e88-5744-432e-908a-9af588af5093": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433717762s
STEP: Saw pod success
Sep 22 09:04:30.168: INFO: Pod "pod-0a0a7e88-5744-432e-908a-9af588af5093" satisfied condition "Succeeded or Failed"
Sep 22 09:04:30.312: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-0a0a7e88-5744-432e-908a-9af588af5093 container test-container: <nil>
STEP: delete the pod
Sep 22 09:04:30.611: INFO: Waiting for pod pod-0a0a7e88-5744-432e-908a-9af588af5093 to disappear
Sep 22 09:04:30.755: INFO: Pod pod-0a0a7e88-5744-432e-908a-9af588af5093 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.184 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:50.522 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":9,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:32.527: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":6,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:33.066: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 123 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity unused
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":9,"skipped":64,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:34.400: INFO: Only supported for providers [vsphere] (not aws)
... skipping 117 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:34.981: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 68 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:04:31.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-834fd569-198f-4732-a6f5-6bfb4f2806dd" in namespace "projected-6403" to be "Succeeded or Failed"
Sep 22 09:04:32.073: INFO: Pod "downwardapi-volume-834fd569-198f-4732-a6f5-6bfb4f2806dd": Phase="Pending", Reason="", readiness=false. Elapsed: 144.099697ms
Sep 22 09:04:34.218: INFO: Pod "downwardapi-volume-834fd569-198f-4732-a6f5-6bfb4f2806dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28955736s
Sep 22 09:04:36.367: INFO: Pod "downwardapi-volume-834fd569-198f-4732-a6f5-6bfb4f2806dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.438821378s
STEP: Saw pod success
Sep 22 09:04:36.368: INFO: Pod "downwardapi-volume-834fd569-198f-4732-a6f5-6bfb4f2806dd" satisfied condition "Succeeded or Failed"
Sep 22 09:04:36.515: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod downwardapi-volume-834fd569-198f-4732-a6f5-6bfb4f2806dd container client-container: <nil>
STEP: delete the pod
Sep 22 09:04:36.816: INFO: Waiting for pod downwardapi-volume-834fd569-198f-4732-a6f5-6bfb4f2806dd to disappear
Sep 22 09:04:36.973: INFO: Pod downwardapi-volume-834fd569-198f-4732-a6f5-6bfb4f2806dd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.200 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:37.272: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":30,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:39.001: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 46 lines ...
Sep 22 09:04:33.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Sep 22 09:04:33.955: INFO: Waiting up to 5m0s for pod "test-pod-d83dbb5e-0619-40e6-8d83-a51d41b8286f" in namespace "svcaccounts-4393" to be "Succeeded or Failed"
Sep 22 09:04:34.099: INFO: Pod "test-pod-d83dbb5e-0619-40e6-8d83-a51d41b8286f": Phase="Pending", Reason="", readiness=false. Elapsed: 143.683724ms
Sep 22 09:04:36.244: INFO: Pod "test-pod-d83dbb5e-0619-40e6-8d83-a51d41b8286f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288939212s
Sep 22 09:04:38.389: INFO: Pod "test-pod-d83dbb5e-0619-40e6-8d83-a51d41b8286f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434180408s
STEP: Saw pod success
Sep 22 09:04:38.389: INFO: Pod "test-pod-d83dbb5e-0619-40e6-8d83-a51d41b8286f" satisfied condition "Succeeded or Failed"
Sep 22 09:04:38.533: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod test-pod-d83dbb5e-0619-40e6-8d83-a51d41b8286f container agnhost-container: <nil>
STEP: delete the pod
Sep 22 09:04:38.827: INFO: Waiting for pod test-pod-d83dbb5e-0619-40e6-8d83-a51d41b8286f to disappear
Sep 22 09:04:38.970: INFO: Pod test-pod-d83dbb5e-0619-40e6-8d83-a51d41b8286f no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 25 lines ...
Sep 22 09:04:16.009: INFO: Creating new exec pod
Sep 22 09:04:19.586: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7789 exec execpodm8jm4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 22 09:04:21.074: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Sep 22 09:04:21.074: INFO: stdout: ""
Sep 22 09:04:22.075: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7789 exec execpodm8jm4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 22 09:04:25.596: INFO: rc: 1
Sep 22 09:04:25.596: INFO: Service reachability failing with error: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7789 exec execpodm8jm4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 22 09:04:26.075: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7789 exec execpodm8jm4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 22 09:04:27.571: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Sep 22 09:04:27.571: INFO: stdout: "externalname-service-dmt4t"
Sep 22 09:04:27.572: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7789 exec execpodm8jm4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.65.25.87 80'
Sep 22 09:04:31.024: INFO: rc: 1
Sep 22 09:04:31.025: INFO: Service reachability failing with error: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7789 exec execpodm8jm4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.65.25.87 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.65.25.87 80
nc: connect to 100.65.25.87 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 22 09:04:32.025: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7789 exec execpodm8jm4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.65.25.87 80'
Sep 22 09:04:33.548: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.65.25.87 80\nConnection to 100.65.25.87 80 port [tcp/http] succeeded!\n"
Sep 22 09:04:33.548: INFO: stdout: "externalname-service-dmt4t"
Sep 22 09:04:33.548: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7789 exec execpodm8jm4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.33.99 31942'
... skipping 17 lines ...
• [SLOW TEST:28.042 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":5,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:39.610: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:04:40.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7724" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":10,"skipped":46,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:41.102: INFO: Driver "local" does not provide raw block - skipping
... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 53 lines ...
Sep 22 09:03:58.278: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathlj7nr] to have phase Bound
Sep 22 09:03:58.423: INFO: PersistentVolumeClaim csi-hostpathlj7nr found but phase is Pending instead of Bound.
Sep 22 09:04:00.570: INFO: PersistentVolumeClaim csi-hostpathlj7nr found but phase is Pending instead of Bound.
Sep 22 09:04:02.718: INFO: PersistentVolumeClaim csi-hostpathlj7nr found and phase=Bound (4.439492399s)
STEP: Creating pod pod-subpath-test-dynamicpv-vtd9
STEP: Creating a pod to test subpath
Sep 22 09:04:03.157: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vtd9" in namespace "provisioning-9098" to be "Succeeded or Failed"
Sep 22 09:04:03.301: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Pending", Reason="", readiness=false. Elapsed: 144.13929ms
Sep 22 09:04:05.446: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28881187s
Sep 22 09:04:07.592: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434470459s
Sep 22 09:04:09.737: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579962657s
Sep 22 09:04:11.882: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724651074s
Sep 22 09:04:14.027: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.870120838s
Sep 22 09:04:16.175: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.017643086s
Sep 22 09:04:18.320: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.163020845s
Sep 22 09:04:20.465: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.308112327s
Sep 22 09:04:22.611: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.453756974s
Sep 22 09:04:24.756: INFO: Pod "pod-subpath-test-dynamicpv-vtd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.598916547s
STEP: Saw pod success
Sep 22 09:04:24.756: INFO: Pod "pod-subpath-test-dynamicpv-vtd9" satisfied condition "Succeeded or Failed"
Sep 22 09:04:24.901: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-vtd9 container test-container-subpath-dynamicpv-vtd9: <nil>
STEP: delete the pod
Sep 22 09:04:25.196: INFO: Waiting for pod pod-subpath-test-dynamicpv-vtd9 to disappear
Sep 22 09:04:25.341: INFO: Pod pod-subpath-test-dynamicpv-vtd9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vtd9
Sep 22 09:04:25.341: INFO: Deleting pod "pod-subpath-test-dynamicpv-vtd9" in namespace "provisioning-9098"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":93,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:04:39.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep 22 09:04:40.141: INFO: Waiting up to 5m0s for pod "security-context-ff9f5b60-4074-45b3-aa50-f948d59f2cb3" in namespace "security-context-4322" to be "Succeeded or Failed"
Sep 22 09:04:40.286: INFO: Pod "security-context-ff9f5b60-4074-45b3-aa50-f948d59f2cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 144.453975ms
Sep 22 09:04:42.430: INFO: Pod "security-context-ff9f5b60-4074-45b3-aa50-f948d59f2cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289234185s
Sep 22 09:04:44.582: INFO: Pod "security-context-ff9f5b60-4074-45b3-aa50-f948d59f2cb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.440524859s
STEP: Saw pod success
Sep 22 09:04:44.582: INFO: Pod "security-context-ff9f5b60-4074-45b3-aa50-f948d59f2cb3" satisfied condition "Succeeded or Failed"
Sep 22 09:04:44.726: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod security-context-ff9f5b60-4074-45b3-aa50-f948d59f2cb3 container test-container: <nil>
STEP: delete the pod
Sep 22 09:04:45.021: INFO: Waiting for pod security-context-ff9f5b60-4074-45b3-aa50-f948d59f2cb3 to disappear
Sep 22 09:04:45.165: INFO: Pod security-context-ff9f5b60-4074-45b3-aa50-f948d59f2cb3 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.183 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":8,"skipped":51,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:45.492: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":7,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:03:07.408: INFO: >>> kubeConfig: /root/.kube/config
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":8,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:47.943: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":38,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:48.559: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-0a93053c-045e-4e6e-ab13-baa39d58d57b
STEP: Creating a pod to test consume configMaps
Sep 22 09:04:46.592: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8979351-c790-4d75-947e-850da7d581a1" in namespace "projected-6995" to be "Succeeded or Failed"
Sep 22 09:04:46.736: INFO: Pod "pod-projected-configmaps-b8979351-c790-4d75-947e-850da7d581a1": Phase="Pending", Reason="", readiness=false. Elapsed: 143.648415ms
Sep 22 09:04:48.880: INFO: Pod "pod-projected-configmaps-b8979351-c790-4d75-947e-850da7d581a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287811725s
Sep 22 09:04:51.025: INFO: Pod "pod-projected-configmaps-b8979351-c790-4d75-947e-850da7d581a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432764885s
STEP: Saw pod success
Sep 22 09:04:51.025: INFO: Pod "pod-projected-configmaps-b8979351-c790-4d75-947e-850da7d581a1" satisfied condition "Succeeded or Failed"
Sep 22 09:04:51.170: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-projected-configmaps-b8979351-c790-4d75-947e-850da7d581a1 container agnhost-container: <nil>
STEP: delete the pod
Sep 22 09:04:51.469: INFO: Waiting for pod pod-projected-configmaps-b8979351-c790-4d75-947e-850da7d581a1 to disappear
Sep 22 09:04:51.613: INFO: Pod pod-projected-configmaps-b8979351-c790-4d75-947e-850da7d581a1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.334 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:51.913: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 72 lines ...
• [SLOW TEST:5.082 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support CSR API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":4,"skipped":47,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:53.697: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 62 lines ...
Sep 22 09:04:44.379: INFO: PersistentVolumeClaim pvc-fqgb6 found but phase is Pending instead of Bound.
Sep 22 09:04:46.523: INFO: PersistentVolumeClaim pvc-fqgb6 found and phase=Bound (2.287604883s)
Sep 22 09:04:46.523: INFO: Waiting up to 3m0s for PersistentVolume local-2dqt7 to have phase Bound
Sep 22 09:04:46.666: INFO: PersistentVolume local-2dqt7 found and phase=Bound (143.100802ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j9cn
STEP: Creating a pod to test subpath
Sep 22 09:04:47.098: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j9cn" in namespace "provisioning-7787" to be "Succeeded or Failed"
Sep 22 09:04:47.241: INFO: Pod "pod-subpath-test-preprovisionedpv-j9cn": Phase="Pending", Reason="", readiness=false. Elapsed: 143.42524ms
Sep 22 09:04:49.386: INFO: Pod "pod-subpath-test-preprovisionedpv-j9cn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287500684s
Sep 22 09:04:51.529: INFO: Pod "pod-subpath-test-preprovisionedpv-j9cn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.431341589s
STEP: Saw pod success
Sep 22 09:04:51.529: INFO: Pod "pod-subpath-test-preprovisionedpv-j9cn" satisfied condition "Succeeded or Failed"
Sep 22 09:04:51.673: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-j9cn container test-container-subpath-preprovisionedpv-j9cn: <nil>
STEP: delete the pod
Sep 22 09:04:51.965: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j9cn to disappear
Sep 22 09:04:52.108: INFO: Pod pod-subpath-test-preprovisionedpv-j9cn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j9cn
Sep 22 09:04:52.108: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j9cn" in namespace "provisioning-7787"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:55.063: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 107 lines ...
Sep 22 09:03:24.648: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2523
Sep 22 09:03:24.792: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2523
Sep 22 09:03:24.937: INFO: creating *v1.StatefulSet: csi-mock-volumes-2523-1756/csi-mockplugin
Sep 22 09:03:25.083: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2523
Sep 22 09:03:25.227: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2523"
Sep 22 09:03:25.371: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2523 to register on node ip-172-20-50-246.sa-east-1.compute.internal
I0922 09:03:32.362923    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0922 09:03:32.508693    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2523","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0922 09:03:32.654555    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0922 09:03:32.798138    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0922 09:03:33.134587    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2523","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0922 09:03:33.863613    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2523"},"Error":"","FullError":null}
STEP: Creating pod
Sep 22 09:03:35.739: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Sep 22 09:03:35.885: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5kfc8] to have phase Bound
I0922 09:03:35.898479    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Sep 22 09:03:36.028: INFO: PersistentVolumeClaim pvc-5kfc8 found but phase is Pending instead of Bound.
I0922 09:03:36.042852    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741"}}},"Error":"","FullError":null}
Sep 22 09:03:38.175: INFO: PersistentVolumeClaim pvc-5kfc8 found and phase=Bound (2.29019332s)
I0922 09:03:39.013113    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Sep 22 09:03:39.157: INFO: >>> kubeConfig: /root/.kube/config
I0922 09:03:40.096297    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741","storage.kubernetes.io/csiProvisionerIdentity":"1632301412876-8081-csi-mock-csi-mock-volumes-2523"}},"Response":{},"Error":"","FullError":null}
I0922 09:03:40.243519    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Sep 22 09:03:40.392: INFO: >>> kubeConfig: /root/.kube/config
Sep 22 09:03:41.356: INFO: >>> kubeConfig: /root/.kube/config
Sep 22 09:03:42.291: INFO: >>> kubeConfig: /root/.kube/config
I0922 09:03:43.259502    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741/globalmount","target_path":"/var/lib/kubelet/pods/c7d3f12f-4c24-46cc-b818-5c128091c416/volumes/kubernetes.io~csi/pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741","storage.kubernetes.io/csiProvisionerIdentity":"1632301412876-8081-csi-mock-csi-mock-volumes-2523"}},"Response":{},"Error":"","FullError":null}
Sep 22 09:03:44.946: INFO: Deleting pod "pvc-volume-tester-cn478" in namespace "csi-mock-volumes-2523"
Sep 22 09:03:45.093: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cn478" to be fully deleted
Sep 22 09:03:47.778: INFO: >>> kubeConfig: /root/.kube/config
I0922 09:03:48.715533    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/c7d3f12f-4c24-46cc-b818-5c128091c416/volumes/kubernetes.io~csi/pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741/mount"},"Response":{},"Error":"","FullError":null}
I0922 09:03:48.893465    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0922 09:03:49.041385    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741/globalmount"},"Response":{},"Error":"","FullError":null}
I0922 09:03:53.539515    5290 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
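Above, the mock driver answers the first CreateVolume with a gRPC ResourceExhausted status ("fake error"), the provisioner issues the call again about 150ms later, and the second attempt succeeds. A small sketch of classifying such an error by status code on the caller side; which codes a real provisioner treats as retryable is an assumption of this sketch, not something the log states.

package sketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isTransientCSIError classifies a CSI RPC error by its gRPC status code.
// Treating these particular codes as retryable is an assumption of this
// sketch; the log only shows that the provisioner repeats CreateVolume after
// the mock driver's ResourceExhausted "fake error" and then succeeds.
func isTransientCSIError(err error) bool {
	if err == nil {
		return false
	}
	switch status.Code(err) {
	case codes.ResourceExhausted, codes.Unavailable, codes.DeadlineExceeded:
		return true
	default:
		return false
	}
}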
STEP: Checking PVC events
Sep 22 09:03:54.528: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5kfc8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2523", SelfLink:"", UID:"1d0dbb3b-9dca-412f-a0a3-2da39524e741", ResourceVersion:"8295", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767898215, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00319ac18), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00319ac30)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0029717c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0029717d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 22 09:03:54.528: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5kfc8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2523", SelfLink:"", UID:"1d0dbb3b-9dca-412f-a0a3-2da39524e741", ResourceVersion:"8297", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767898215, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2523"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00319aca8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00319acc0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00319acd8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00319acf0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002971800), VolumeMode:(*v1.PersistentVolumeMode)(0xc002971810), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 22 09:03:54.528: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5kfc8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2523", SelfLink:"", UID:"1d0dbb3b-9dca-412f-a0a3-2da39524e741", ResourceVersion:"8305", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767898215, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2523"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032ff6e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032ff6f8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032ff710), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032ff728)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741", StorageClassName:(*string)(0xc002971ac0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002971ad0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 22 09:03:54.528: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5kfc8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2523", SelfLink:"", UID:"1d0dbb3b-9dca-412f-a0a3-2da39524e741", ResourceVersion:"8306", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767898215, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2523"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032ff758), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032ff770)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032ff788), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032ff7a0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741", StorageClassName:(*string)(0xc002971b00), VolumeMode:(*v1.PersistentVolumeMode)(0xc002971b10), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 22 09:03:54.528: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5kfc8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2523", SelfLink:"", UID:"1d0dbb3b-9dca-412f-a0a3-2da39524e741", ResourceVersion:"8856", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767898215, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc0032ff7d0), DeletionGracePeriodSeconds:(*int64)(0xc006af9c18), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2523"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032ff7e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032ff800)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032ff818), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032ff830)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1d0dbb3b-9dca-412f-a0a3-2da39524e741", StorageClassName:(*string)(0xc002971b50), VolumeMode:(*v1.PersistentVolumeMode)(0xc002971b60), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 164 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Sep 22 09:04:48.664: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2468 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Sep 22 09:04:50.339: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Sep 22 09:04:50.339: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2468 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Sep 22 09:04:51.963: INFO: rc: 255
Sep 22 09:04:51.964: INFO: got err error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2468 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0922 09:04:51.790170     195 merged_client_builder.go:163] Using in-cluster namespace
I0922 09:04:51.790644     195 merged_client_builder.go:121] Using in-cluster configuration
I0922 09:04:51.793128     195 merged_client_builder.go:121] Using in-cluster configuration
I0922 09:04:51.796874     195 merged_client_builder.go:121] Using in-cluster configuration
I0922 09:04:51.797246     195 round_trippers.go:432] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-2468/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0922 09:04:51.801960     195 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc000030a80, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x308aa00, 0xc000000003, 0x0, 0x0, 0xc0003961c0, 0x261bfd7, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x308aa00, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc00090a2e0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0002a9080, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x209ade0, 0xc0003a5ba8, 0x1f24400)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8a3
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0004a1080, 0xc0002351a0, 0x1, 0x3)
... skipping 66 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Sep 22 09:04:51.964: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2468 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Sep 22 09:04:53.564: INFO: rc: 255
Sep 22 09:04:53.564: INFO: got err error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2468 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0922 09:04:53.389747     207 merged_client_builder.go:163] Using in-cluster namespace
I0922 09:04:53.419360     207 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 29 milliseconds
I0922 09:04:53.419465     207 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0922 09:04:53.444662     207 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 24 milliseconds
I0922 09:04:53.444740     207 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0922 09:04:53.444757     207 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0922 09:04:53.464554     207 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 19 milliseconds
I0922 09:04:53.464647     207 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0922 09:04:53.466664     207 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0922 09:04:53.466727     207 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0922 09:04:53.468716     207 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0922 09:04:53.468792     207 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0922 09:04:53.468959     207 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: no such host
F0922 09:04:53.469053     207 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc00030c380, 0x88, 0x1b8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x308aa00, 0xc000000003, 0x0, 0x0, 0xc00077d3b0, 0x261bfd7, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x308aa00, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0005b3eb0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0006bcc00, 0x59, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x209a140, 0xc0008d2600, 0x1f24400)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0004e3080, 0xc00043cd80, 0x1, 0x3)
... skipping 24 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Sep 22 09:04:53.564: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2468 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Sep 22 09:04:55.123: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Sep 22 09:04:55.123: INFO: stdout: "I0922 09:04:55.033324     218 merged_client_builder.go:121] Using in-cluster configuration\nI0922 09:04:55.036694     218 merged_client_builder.go:121] Using in-cluster configuration\nI0922 09:04:55.040474     218 merged_client_builder.go:121] Using in-cluster configuration\nI0922 09:04:55.046922     218 round_trippers.go:454] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 5 milliseconds\nNo resources found in invalid namespace.\n"
Sep 22 09:04:55.123: INFO: stdout: I0922 09:04:55.033324     218 merged_client_builder.go:121] Using in-cluster configuration
... skipping 84 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-619122de-1d6c-4029-8fe5-75c88d9875a9
STEP: Creating a pod to test consume secrets
Sep 22 09:04:56.123: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7947d973-8259-469a-8bdf-0b42ef309fd3" in namespace "projected-7064" to be "Succeeded or Failed"
Sep 22 09:04:56.266: INFO: Pod "pod-projected-secrets-7947d973-8259-469a-8bdf-0b42ef309fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 143.336743ms
Sep 22 09:04:58.411: INFO: Pod "pod-projected-secrets-7947d973-8259-469a-8bdf-0b42ef309fd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287943271s
STEP: Saw pod success
Sep 22 09:04:58.411: INFO: Pod "pod-projected-secrets-7947d973-8259-469a-8bdf-0b42ef309fd3" satisfied condition "Succeeded or Failed"
Sep 22 09:04:58.555: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-projected-secrets-7947d973-8259-469a-8bdf-0b42ef309fd3 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 22 09:04:58.859: INFO: Waiting for pod pod-projected-secrets-7947d973-8259-469a-8bdf-0b42ef309fd3 to disappear
Sep 22 09:04:59.003: INFO: Pod pod-projected-secrets-7947d973-8259-469a-8bdf-0b42ef309fd3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:04:59.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7064" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:04:59.308: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 169 lines ...
• [SLOW TEST:10.468 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":10,"skipped":82,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:02.428: INFO: Only supported for providers [azure] (not aws)
... skipping 125 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":10,"skipped":69,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:04.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7144" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":11,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":8,"skipped":78,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:04:58.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Sep 22 09:04:59.781: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-224ca334-412a-4262-98b0-733cac63a7ff" in namespace "security-context-test-1417" to be "Succeeded or Failed"
Sep 22 09:04:59.926: INFO: Pod "alpine-nnp-true-224ca334-412a-4262-98b0-733cac63a7ff": Phase="Pending", Reason="", readiness=false. Elapsed: 144.356792ms
Sep 22 09:05:02.071: INFO: Pod "alpine-nnp-true-224ca334-412a-4262-98b0-733cac63a7ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289557866s
Sep 22 09:05:04.216: INFO: Pod "alpine-nnp-true-224ca334-412a-4262-98b0-733cac63a7ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434848946s
Sep 22 09:05:06.367: INFO: Pod "alpine-nnp-true-224ca334-412a-4262-98b0-733cac63a7ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.585767262s
Sep 22 09:05:06.367: INFO: Pod "alpine-nnp-true-224ca334-412a-4262-98b0-733cac63a7ff" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:06.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1417" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":9,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:06.817: INFO: Only supported for providers [vsphere] (not aws)
... skipping 134 lines ...
Sep 22 09:04:53.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
Sep 22 09:04:54.454: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 22 09:04:54.764: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7392" in namespace "provisioning-7392" to be "Succeeded or Failed"
Sep 22 09:04:54.908: INFO: Pod "hostpath-symlink-prep-provisioning-7392": Phase="Pending", Reason="", readiness=false. Elapsed: 144.098456ms
Sep 22 09:04:57.053: INFO: Pod "hostpath-symlink-prep-provisioning-7392": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289593734s
STEP: Saw pod success
Sep 22 09:04:57.053: INFO: Pod "hostpath-symlink-prep-provisioning-7392" satisfied condition "Succeeded or Failed"
Sep 22 09:04:57.053: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7392" in namespace "provisioning-7392"
Sep 22 09:04:57.203: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7392" to be fully deleted
Sep 22 09:04:57.347: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-xzj5
STEP: Creating a pod to test subpath
Sep 22 09:04:57.494: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-xzj5" in namespace "provisioning-7392" to be "Succeeded or Failed"
Sep 22 09:04:57.638: INFO: Pod "pod-subpath-test-inlinevolume-xzj5": Phase="Pending", Reason="", readiness=false. Elapsed: 144.481346ms
Sep 22 09:04:59.785: INFO: Pod "pod-subpath-test-inlinevolume-xzj5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290932669s
Sep 22 09:05:01.930: INFO: Pod "pod-subpath-test-inlinevolume-xzj5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.436579006s
STEP: Saw pod success
Sep 22 09:05:01.931: INFO: Pod "pod-subpath-test-inlinevolume-xzj5" satisfied condition "Succeeded or Failed"
Sep 22 09:05:02.075: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-xzj5 container test-container-subpath-inlinevolume-xzj5: <nil>
STEP: delete the pod
Sep 22 09:05:02.372: INFO: Waiting for pod pod-subpath-test-inlinevolume-xzj5 to disappear
Sep 22 09:05:02.517: INFO: Pod pod-subpath-test-inlinevolume-xzj5 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-xzj5
Sep 22 09:05:02.517: INFO: Deleting pod "pod-subpath-test-inlinevolume-xzj5" in namespace "provisioning-7392"
STEP: Deleting pod
Sep 22 09:05:02.662: INFO: Deleting pod "pod-subpath-test-inlinevolume-xzj5" in namespace "provisioning-7392"
Sep 22 09:05:02.952: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7392" in namespace "provisioning-7392" to be "Succeeded or Failed"
Sep 22 09:05:03.098: INFO: Pod "hostpath-symlink-prep-provisioning-7392": Phase="Pending", Reason="", readiness=false. Elapsed: 145.543221ms
Sep 22 09:05:05.245: INFO: Pod "hostpath-symlink-prep-provisioning-7392": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292473841s
Sep 22 09:05:07.390: INFO: Pod "hostpath-symlink-prep-provisioning-7392": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437679003s
STEP: Saw pod success
Sep 22 09:05:07.390: INFO: Pod "hostpath-symlink-prep-provisioning-7392" satisfied condition "Succeeded or Failed"
Sep 22 09:05:07.390: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7392" in namespace "provisioning-7392"
Sep 22 09:05:07.541: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7392" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:07.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7392" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":59,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:33.791 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":10,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:24.415 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":12,"skipped":98,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:5.747 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: no PDB => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":6,"skipped":60,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:13.747: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:05:09.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-792e757e-aeca-44af-b21d-3cf99151dfff" in namespace "downward-api-1013" to be "Succeeded or Failed"
Sep 22 09:05:09.239: INFO: Pod "downwardapi-volume-792e757e-aeca-44af-b21d-3cf99151dfff": Phase="Pending", Reason="", readiness=false. Elapsed: 143.51953ms
Sep 22 09:05:11.385: INFO: Pod "downwardapi-volume-792e757e-aeca-44af-b21d-3cf99151dfff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288716553s
Sep 22 09:05:13.529: INFO: Pod "downwardapi-volume-792e757e-aeca-44af-b21d-3cf99151dfff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433397533s
STEP: Saw pod success
Sep 22 09:05:13.529: INFO: Pod "downwardapi-volume-792e757e-aeca-44af-b21d-3cf99151dfff" satisfied condition "Succeeded or Failed"
Sep 22 09:05:13.673: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod downwardapi-volume-792e757e-aeca-44af-b21d-3cf99151dfff container client-container: <nil>
STEP: delete the pod
Sep 22 09:05:13.967: INFO: Waiting for pod downwardapi-volume-792e757e-aeca-44af-b21d-3cf99151dfff to disappear
Sep 22 09:05:14.112: INFO: Pod downwardapi-volume-792e757e-aeca-44af-b21d-3cf99151dfff no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.168 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":75,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 107 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436
    should not modify fsGroup if fsGroupPolicy=None
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":6,"skipped":8,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Sep 22 09:05:15.290: INFO: Waiting up to 5m0s for pod "metadata-volume-2c30897d-1f2b-4aa3-8c89-ae455560a210" in namespace "projected-7616" to be "Succeeded or Failed"
Sep 22 09:05:15.433: INFO: Pod "metadata-volume-2c30897d-1f2b-4aa3-8c89-ae455560a210": Phase="Pending", Reason="", readiness=false. Elapsed: 143.371577ms
Sep 22 09:05:17.577: INFO: Pod "metadata-volume-2c30897d-1f2b-4aa3-8c89-ae455560a210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287034317s
Sep 22 09:05:19.721: INFO: Pod "metadata-volume-2c30897d-1f2b-4aa3-8c89-ae455560a210": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431130271s
Sep 22 09:05:21.865: INFO: Pod "metadata-volume-2c30897d-1f2b-4aa3-8c89-ae455560a210": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.57470679s
STEP: Saw pod success
Sep 22 09:05:21.865: INFO: Pod "metadata-volume-2c30897d-1f2b-4aa3-8c89-ae455560a210" satisfied condition "Succeeded or Failed"
Sep 22 09:05:22.008: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod metadata-volume-2c30897d-1f2b-4aa3-8c89-ae455560a210 container client-container: <nil>
STEP: delete the pod
Sep 22 09:05:22.311: INFO: Waiting for pod metadata-volume-2c30897d-1f2b-4aa3-8c89-ae455560a210 to disappear
Sep 22 09:05:22.457: INFO: Pod metadata-volume-2c30897d-1f2b-4aa3-8c89-ae455560a210 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.319 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":12,"skipped":80,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":9,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:04:58.313: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Sep 22 09:05:14.393: INFO: PersistentVolumeClaim pvc-w8cms found but phase is Pending instead of Bound.
Sep 22 09:05:16.537: INFO: PersistentVolumeClaim pvc-w8cms found and phase=Bound (13.020374508s)
Sep 22 09:05:16.537: INFO: Waiting up to 3m0s for PersistentVolume local-h49qq to have phase Bound
Sep 22 09:05:16.681: INFO: PersistentVolume local-h49qq found and phase=Bound (143.592247ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zd4p
STEP: Creating a pod to test subpath
Sep 22 09:05:17.115: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zd4p" in namespace "provisioning-3559" to be "Succeeded or Failed"
Sep 22 09:05:17.260: INFO: Pod "pod-subpath-test-preprovisionedpv-zd4p": Phase="Pending", Reason="", readiness=false. Elapsed: 144.011117ms
Sep 22 09:05:19.405: INFO: Pod "pod-subpath-test-preprovisionedpv-zd4p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289607127s
Sep 22 09:05:21.550: INFO: Pod "pod-subpath-test-preprovisionedpv-zd4p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.43438328s
STEP: Saw pod success
Sep 22 09:05:21.550: INFO: Pod "pod-subpath-test-preprovisionedpv-zd4p" satisfied condition "Succeeded or Failed"
Sep 22 09:05:21.694: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-zd4p container test-container-subpath-preprovisionedpv-zd4p: <nil>
STEP: delete the pod
Sep 22 09:05:21.991: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zd4p to disappear
Sep 22 09:05:22.134: INFO: Pod pod-subpath-test-preprovisionedpv-zd4p no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zd4p
Sep 22 09:05:22.134: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zd4p" in namespace "provisioning-3559"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":80,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:24.122: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 39 lines ...
Sep 22 09:05:15.142: INFO: PersistentVolumeClaim pvc-tgsxd found but phase is Pending instead of Bound.
Sep 22 09:05:17.287: INFO: PersistentVolumeClaim pvc-tgsxd found and phase=Bound (4.437795665s)
Sep 22 09:05:17.287: INFO: Waiting up to 3m0s for PersistentVolume local-2l7wv to have phase Bound
Sep 22 09:05:17.433: INFO: PersistentVolume local-2l7wv found and phase=Bound (145.673133ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9vh5
STEP: Creating a pod to test subpath
Sep 22 09:05:17.875: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9vh5" in namespace "provisioning-5094" to be "Succeeded or Failed"
Sep 22 09:05:18.019: INFO: Pod "pod-subpath-test-preprovisionedpv-9vh5": Phase="Pending", Reason="", readiness=false. Elapsed: 144.317076ms
Sep 22 09:05:20.165: INFO: Pod "pod-subpath-test-preprovisionedpv-9vh5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290237515s
Sep 22 09:05:22.313: INFO: Pod "pod-subpath-test-preprovisionedpv-9vh5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437815381s
STEP: Saw pod success
Sep 22 09:05:22.313: INFO: Pod "pod-subpath-test-preprovisionedpv-9vh5" satisfied condition "Succeeded or Failed"
Sep 22 09:05:22.458: INFO: Trying to get logs from node ip-172-20-38-78.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-9vh5 container test-container-subpath-preprovisionedpv-9vh5: <nil>
STEP: delete the pod
Sep 22 09:05:22.759: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9vh5 to disappear
Sep 22 09:05:22.903: INFO: Pod pod-subpath-test-preprovisionedpv-9vh5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9vh5
Sep 22 09:05:22.903: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9vh5" in namespace "provisioning-5094"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":13,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:24.909: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 155 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":11,"skipped":80,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:27.075: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 114 lines ...
• [SLOW TEST:9.022 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":11,"skipped":84,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 96 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:05:24.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:33.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1033" for this suite.


• [SLOW TEST:9.314 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":14,"skipped":104,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:34.267: INFO: Only supported for providers [gce gke] (not aws)
... skipping 62 lines ...
• [SLOW TEST:13.018 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":13,"skipped":85,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:35.812: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 44 lines ...
Sep 22 09:05:35.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 22 09:05:36.696: INFO: Waiting up to 5m0s for pod "downward-api-526f4934-c825-4f8e-ae85-ac4896ccbcaa" in namespace "downward-api-6544" to be "Succeeded or Failed"
Sep 22 09:05:36.839: INFO: Pod "downward-api-526f4934-c825-4f8e-ae85-ac4896ccbcaa": Phase="Pending", Reason="", readiness=false. Elapsed: 143.33126ms
Sep 22 09:05:38.982: INFO: Pod "downward-api-526f4934-c825-4f8e-ae85-ac4896ccbcaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286604013s
STEP: Saw pod success
Sep 22 09:05:38.982: INFO: Pod "downward-api-526f4934-c825-4f8e-ae85-ac4896ccbcaa" satisfied condition "Succeeded or Failed"
Sep 22 09:05:39.126: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod downward-api-526f4934-c825-4f8e-ae85-ac4896ccbcaa container dapi-container: <nil>
STEP: delete the pod
Sep 22 09:05:39.423: INFO: Waiting for pod downward-api-526f4934-c825-4f8e-ae85-ac4896ccbcaa to disappear
Sep 22 09:05:39.566: INFO: Pod downward-api-526f4934-c825-4f8e-ae85-ac4896ccbcaa no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:39.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6544" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":89,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:39.886: INFO: Only supported for providers [gce gke] (not aws)
... skipping 161 lines ...
• [SLOW TEST:8.163 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":15,"skipped":111,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:42.460: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:42.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7388" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":15,"skipped":106,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:42.901: INFO: Only supported for providers [vsphere] (not aws)
... skipping 14 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:05:07.954: INFO: >>> kubeConfig: /root/.kube/config
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":12,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:44.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-461" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":16,"skipped":122,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:44.425: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-d44c78d7-7482-44e2-bbab-6322e7428f40
STEP: Creating a pod to test consume secrets
Sep 22 09:05:44.478: INFO: Waiting up to 5m0s for pod "pod-secrets-294e7493-dc0a-40b2-b4be-a73425181451" in namespace "secrets-9054" to be "Succeeded or Failed"
Sep 22 09:05:44.622: INFO: Pod "pod-secrets-294e7493-dc0a-40b2-b4be-a73425181451": Phase="Pending", Reason="", readiness=false. Elapsed: 144.021627ms
Sep 22 09:05:46.768: INFO: Pod "pod-secrets-294e7493-dc0a-40b2-b4be-a73425181451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289398456s
STEP: Saw pod success
Sep 22 09:05:46.768: INFO: Pod "pod-secrets-294e7493-dc0a-40b2-b4be-a73425181451" satisfied condition "Succeeded or Failed"
Sep 22 09:05:46.918: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-secrets-294e7493-dc0a-40b2-b4be-a73425181451 container secret-volume-test: <nil>
STEP: delete the pod
Sep 22 09:05:47.232: INFO: Waiting for pod pod-secrets-294e7493-dc0a-40b2-b4be-a73425181451 to disappear
Sep 22 09:05:47.376: INFO: Pod pod-secrets-294e7493-dc0a-40b2-b4be-a73425181451 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:47.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9054" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:47.686: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 93 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "services-3644" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
... skipping 9 lines ...
Sep 22 09:05:19.019: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-3001wrgwv
STEP: creating a claim
Sep 22 09:05:19.164: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Sep 22 09:05:19.461: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Sep 22 09:05:19.750: INFO: Error updating pvc awszxqmh: PersistentVolumeClaim "awszxqmh" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-3001wrgwv",
  	... // 2 identical fields
  }

... skipping 210 lines: 15 further identical update attempts (every ~2s from 09:05:22.038 to 09:05:50.041) were each rejected with the same "spec: Forbidden: spec is immutable after creation except resources.requests for bound claims" error ...

Sep 22 09:05:50.330: INFO: Error updating pvc awszxqmh: PersistentVolumeClaim "awszxqmh" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":7,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:51.067: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 60 lines ...
• [SLOW TEST:19.180 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":12,"skipped":85,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:52.367: INFO: Only supported for providers [gce gke] (not aws)
... skipping 41 lines ...
Sep 22 09:05:47.078: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 22 09:05:47.078: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6949 describe pod agnhost-primary-pjd6t'
Sep 22 09:05:47.893: INFO: stderr: ""
Sep 22 09:05:47.893: INFO: stdout: "Name:         agnhost-primary-pjd6t\nNamespace:    kubectl-6949\nPriority:     0\nNode:         ip-172-20-41-3.sa-east-1.compute.internal/172.20.41.3\nStart Time:   Wed, 22 Sep 2021 09:05:44 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.4.103\nIPs:\n  IP:           100.96.4.103\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://6753357718d9e3e8831cbf4fa00ba5b8fc3f16bcb21daf6b7f50f2ab379f40ab\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 22 Sep 2021 09:05:46 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jsmr8 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-jsmr8:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned kubectl-6949/agnhost-primary-pjd6t to ip-172-20-41-3.sa-east-1.compute.internal\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    2s    kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
Sep 22 09:05:47.893: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6949 describe rc agnhost-primary'
Sep 22 09:05:48.856: INFO: stderr: ""
Sep 22 09:05:48.856: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-6949\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-primary-pjd6t\n"
Sep 22 09:05:48.856: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6949 describe service agnhost-primary'
Sep 22 09:05:49.812: INFO: stderr: ""
Sep 22 09:05:49.812: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-6949\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.71.210.73\nIPs:               100.71.210.73\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.4.103:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep 22 09:05:49.958: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6949 describe node ip-172-20-33-99.sa-east-1.compute.internal'
Sep 22 09:05:51.509: INFO: stderr: ""
Sep 22 09:05:51.510: INFO: stdout: "Name:               ip-172-20-33-99.sa-east-1.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.medium\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=sa-east-1\n                    failure-domain.beta.kubernetes.io/zone=sa-east-1a\n                    kops.k8s.io/instancegroup=nodes-sa-east-1a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-33-99.sa-east-1.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=t3.medium\n                    topology.hostpath.csi/node=ip-172-20-33-99.sa-east-1.compute.internal\n                    topology.kubernetes.io/region=sa-east-1\n                    topology.kubernetes.io/zone=sa-east-1a\nAnnotations:        csi.volume.kubernetes.io/nodeid: {\"csi-hostpath-volume-expand-4123\":\"ip-172-20-33-99.sa-east-1.compute.internal\"}\n                    flannel.alpha.coreos.com/backend-data: {\"VtepMAC\":\"66:6d:9d:77:da:40\"}\n                    flannel.alpha.coreos.com/backend-type: vxlan\n                    flannel.alpha.coreos.com/kube-subnet-manager: true\n                    flannel.alpha.coreos.com/public-ip: 172.20.33.99\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 22 Sep 2021 08:56:31 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-33-99.sa-east-1.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Wed, 22 Sep 2021 09:05:41 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Wed, 22 Sep 2021 08:56:44 +0000   Wed, 22 Sep 2021 08:56:44 +0000   FlannelIsUp                  Flannel is running on this node\n  MemoryPressure       False   Wed, 22 Sep 2021 09:05:42 +0000   Wed, 22 Sep 2021 08:56:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 22 Sep 2021 09:05:42 +0000   Wed, 22 Sep 2021 08:56:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 22 Sep 2021 09:05:42 +0000   Wed, 22 Sep 2021 08:56:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 22 Sep 2021 09:05:42 +0000   Wed, 22 Sep 2021 08:56:51 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:   172.20.33.99\n  ExternalIP:   18.230.24.25\n  Hostname:     ip-172-20-33-99.sa-east-1.compute.internal\n  InternalDNS:  ip-172-20-33-99.sa-east-1.compute.internal\n  ExternalDNS:  ec2-18-230-24-25.sa-east-1.compute.amazonaws.com\nCapacity:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           50319340Ki\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3977792Ki\n  pods:                        110\nAllocatable:\n  
attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           46374303668\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3875392Ki\n  pods:                        110\nSystem Info:\n  Machine ID:                 ec214dee1f497e5a0c3f972da49e4684\n  System UUID:                EC2F0EFC-6030-7470-6A6E-010798C9F55F\n  Boot ID:                    ba57dada-e411-469a-ae8a-cc140d532032\n  Kernel Version:             4.14.243-185.433.amzn2.x86_64\n  OS Image:                   Amazon Linux 2\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.9\n  Kubelet Version:            v1.21.5\n  Kube-Proxy Version:         v1.21.5\nPodCIDR:                      100.96.1.0/24\nPodCIDRs:                     100.96.1.0/24\nProviderID:                   aws:///sa-east-1a/i-054e76930bbc32314\nNon-terminated Pods:          (15 in total)\n  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---\n  container-probe-7653        test-webserver-e92e2dd8-083b-4607-bb05-b17ca5f66077      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s\n  dns-3964                    dns-test-22dc9f6a-e950-4f02-bf10-8ed761142a95            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s\n  dns-5155                    dns-test-644eb84e-70d0-44bf-a059-39ec057ca819            0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s\n  gc-6521                     simpletest.deployment-9858f564d-wwz87                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s\n  kube-system                 coredns-5dc785954d-98qd6                                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m\n  kube-system                 coredns-autoscaler-84d4cfd89c-zk4b6                      20m (1%)      0 (0%)      10Mi (0%)        0 (0%)         10m\n  kube-system                 kube-flannel-ds-9n2gf                                    100m (5%)     0 (0%)      100Mi (2%)       100Mi (2%)     9m20s\n  kube-system                 kube-proxy-ip-172-20-33-99.sa-east-1.compute.internal    100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s\n  services-2263               up-down-1-g2kx9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s\n  services-2263               up-down-2-w48tw                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s\n  volume-expand-4123-945      csi-hostpath-attacher-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s\n  volume-expand-4123-945      csi-hostpath-provisioner-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s\n  volume-expand-4123-945      csi-hostpath-resizer-0                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s\n  volume-expand-4123-945      csi-hostpath-snapshotter-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s\n  volume-expand-4123-945      csi-hostpathplugin-0                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s\nAllocated 
resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                    Requests    Limits\n  --------                    --------    ------\n  cpu                         320m (16%)  0 (0%)\n  memory                      180Mi (4%)  270Mi (7%)\n  ephemeral-storage           0 (0%)      0 (0%)\n  hugepages-1Gi               0 (0%)      0 (0%)\n  hugepages-2Mi               0 (0%)      0 (0%)\n  attachable-volumes-aws-ebs  0           0\nEvents:\n  Type     Reason                   Age                    From        Message\n  ----     ------                   ----                   ----        -------\n  Normal   Starting                 9m20s                  kubelet     Starting kubelet.\n  Warning  InvalidDiskCapacity      9m20s                  kubelet     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory  9m20s (x2 over 9m20s)  kubelet     Node ip-172-20-33-99.sa-east-1.compute.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    9m20s (x2 over 9m20s)  kubelet     Node ip-172-20-33-99.sa-east-1.compute.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     9m20s (x2 over 9m20s)  kubelet     Node ip-172-20-33-99.sa-east-1.compute.internal status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  9m20s                  kubelet     Updated Node Allocatable limit across pods\n  Normal   Starting                 9m14s                  kube-proxy  Starting kube-proxy.\n  Normal   NodeReady                9m                     kubelet     Node ip-172-20-33-99.sa-east-1.compute.internal status is now: NodeReady\n"
... skipping 24 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:52.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-9943" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":10,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:53.263: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:05:34.177: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
Sep 22 09:05:45.294: INFO: PersistentVolumeClaim pvc-t2z72 found but phase is Pending instead of Bound.
Sep 22 09:05:47.439: INFO: PersistentVolumeClaim pvc-t2z72 found and phase=Bound (8.720857692s)
Sep 22 09:05:47.439: INFO: Waiting up to 3m0s for PersistentVolume local-hngkx to have phase Bound
Sep 22 09:05:47.583: INFO: PersistentVolume local-hngkx found and phase=Bound (143.898873ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-w8b9
STEP: Creating a pod to test subpath
Sep 22 09:05:48.015: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-w8b9" in namespace "provisioning-4931" to be "Succeeded or Failed"
Sep 22 09:05:48.159: INFO: Pod "pod-subpath-test-preprovisionedpv-w8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 143.848321ms
Sep 22 09:05:50.305: INFO: Pod "pod-subpath-test-preprovisionedpv-w8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289091326s
Sep 22 09:05:52.449: INFO: Pod "pod-subpath-test-preprovisionedpv-w8b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433010152s
STEP: Saw pod success
Sep 22 09:05:52.449: INFO: Pod "pod-subpath-test-preprovisionedpv-w8b9" satisfied condition "Succeeded or Failed"
Sep 22 09:05:52.592: INFO: Trying to get logs from node ip-172-20-41-3.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-w8b9 container test-container-subpath-preprovisionedpv-w8b9: <nil>
STEP: delete the pod
Sep 22 09:05:52.887: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w8b9 to disappear
Sep 22 09:05:53.031: INFO: Pod pod-subpath-test-preprovisionedpv-w8b9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-w8b9
Sep 22 09:05:53.031: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-w8b9" in namespace "provisioning-4931"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":49,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:57.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7316" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":11,"skipped":74,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:57.351: INFO: Only supported for providers [gce gke] (not aws)
... skipping 154 lines ...
Sep 22 09:05:46.941: INFO: Pod aws-client still exists
Sep 22 09:05:48.797: INFO: Waiting for pod aws-client to disappear
Sep 22 09:05:48.941: INFO: Pod aws-client still exists
Sep 22 09:05:50.796: INFO: Waiting for pod aws-client to disappear
Sep 22 09:05:50.939: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Sep 22 09:05:51.842: INFO: Couldn't delete PD "aws://sa-east-1a/vol-03fb9a7b0846a0d0f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03fb9a7b0846a0d0f is currently attached to i-021f849308fc74b8f
	status code: 400, request id: 754788c2-55d6-46bd-8060-87aa9dc8f284
Sep 22 09:05:57.762: INFO: Successfully deleted PD "aws://sa-east-1a/vol-03fb9a7b0846a0d0f".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:05:57.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4090" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":13,"skipped":72,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:05:58.097: INFO: Only supported for providers [gce gke] (not aws)
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":12,"skipped":74,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":16,"skipped":111,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:05:52.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":17,"skipped":111,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:06:02.999: INFO: Only supported for providers [gce gke] (not aws)
... skipping 141 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":6,"skipped":56,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:06:08.335: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
• [SLOW TEST:62.609 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":10,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:06:09.463: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 138 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":12,"skipped":112,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":18,"skipped":118,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:06:15.456: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 158 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":91,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:06:15.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-30f7cf84-e9eb-476e-be95-a1b3a27cb5d4
STEP: Creating a pod to test consume configMaps
Sep 22 09:06:16.515: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2c205d45-d520-4c44-ba9c-ba571bdca930" in namespace "projected-8638" to be "Succeeded or Failed"
Sep 22 09:06:16.661: INFO: Pod "pod-projected-configmaps-2c205d45-d520-4c44-ba9c-ba571bdca930": Phase="Pending", Reason="", readiness=false. Elapsed: 146.087568ms
Sep 22 09:06:18.807: INFO: Pod "pod-projected-configmaps-2c205d45-d520-4c44-ba9c-ba571bdca930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292014851s
Sep 22 09:06:20.952: INFO: Pod "pod-projected-configmaps-2c205d45-d520-4c44-ba9c-ba571bdca930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437060819s
STEP: Saw pod success
Sep 22 09:06:20.952: INFO: Pod "pod-projected-configmaps-2c205d45-d520-4c44-ba9c-ba571bdca930" satisfied condition "Succeeded or Failed"
Sep 22 09:06:21.096: INFO: Trying to get logs from node ip-172-20-50-246.sa-east-1.compute.internal pod pod-projected-configmaps-2c205d45-d520-4c44-ba9c-ba571bdca930 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Sep 22 09:06:21.388: INFO: Waiting for pod pod-projected-configmaps-2c205d45-d520-4c44-ba9c-ba571bdca930 to disappear
Sep 22 09:06:21.532: INFO: Pod pod-projected-configmaps-2c205d45-d520-4c44-ba9c-ba571bdca930 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.315 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":129,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:06:21.855: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
Sep 22 09:04:40.002: INFO: Creating new exec pod
Sep 22 09:04:47.592: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Sep 22 09:04:49.136: INFO: stderr: "+ nc -v -t -w 2 nodeport-update-service 80\n+ echo hostName\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Sep 22 09:04:49.137: INFO: stdout: "nodeport-update-service-42ff6"
Sep 22 09:04:49.137: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80'
Sep 22 09:04:52.593: INFO: rc: 1
Sep 22 09:04:52.593: INFO: Service reachability failing with error: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.70.24.90 80
nc: connect to 100.70.24.90 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 22 09:04:53.593: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80'
Sep 22 09:04:57.098: INFO: rc: 1
Sep 22 09:04:57.098: INFO: Service reachability failing with error: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80:
Command stdout:

stderr:
+ nc -v -t -w 2 100.70.24.90 80
+ echo hostName
nc: connect to 100.70.24.90 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 22 09:04:57.593: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80'
Sep 22 09:05:01.050: INFO: rc: 1
Sep 22 09:05:01.050: INFO: Service reachability failing with error: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.70.24.90 80
nc: connect to 100.70.24.90 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 22 09:05:01.594: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80'
Sep 22 09:05:05.068: INFO: rc: 1
Sep 22 09:05:05.069: INFO: Service reachability failing with error: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.70.24.90 80
nc: connect to 100.70.24.90 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 22 09:05:05.594: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80'
Sep 22 09:05:09.100: INFO: rc: 1
Sep 22 09:05:09.100: INFO: Service reachability failing with error: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.70.24.90 80
nc: connect to 100.70.24.90 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 22 09:05:09.593: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80'
Sep 22 09:05:11.061: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.24.90 80\nConnection to 100.70.24.90 80 port [tcp/http] succeeded!\n"
Sep 22 09:05:11.061: INFO: stdout: "nodeport-update-service-42ff6"
Sep 22 09:05:11.061: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.78 30533'
... skipping 11 lines ...
Sep 22 09:05:17.929: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 52.67.215.179 30533'
Sep 22 09:05:19.389: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 52.67.215.179 30533\nConnection to 52.67.215.179 30533 port [tcp/*] succeeded!\n"
Sep 22 09:05:19.389: INFO: stdout: "nodeport-update-service-ctbxm"
STEP: Updating NodePort service to listen TCP and UDP based requests over same Port
Sep 22 09:05:20.836: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Sep 22 09:05:24.328: INFO: rc: 1
Sep 22 09:05:24.328: INFO: Service reachability failing with error: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 nodeport-update-service 80
+ echo hostName
nc: connect to nodeport-update-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 22 09:05:25.329: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Sep 22 09:05:26.781: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Sep 22 09:05:26.781: INFO: stdout: "nodeport-update-service-42ff6"
Sep 22 09:05:26.781: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80'
Sep 22 09:05:28.239: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.24.90 80\nConnection to 100.70.24.90 80 port [tcp/http] succeeded!\n"
Sep 22 09:05:28.239: INFO: stdout: ""
Sep 22 09:05:29.240: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80'
Sep 22 09:05:30.775: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.24.90 80\nConnection to 100.70.24.90 80 port [tcp/http] succeeded!\n"
Sep 22 09:05:30.775: INFO: stdout: ""
Sep 22 09:05:31.240: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80'
Sep 22 09:05:34.695: INFO: rc: 1
Sep 22 09:05:34.695: INFO: Service reachability failing with error: error running /tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.70.24.90 80
nc: connect to 100.70.24.90 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 22 09:05:35.240: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.24.90 80'
Sep 22 09:05:36.757: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.24.90 80\nConnection to 100.70.24.90 80 port [tcp/http] succeeded!\n"
Sep 22 09:05:36.757: INFO: stdout: "nodeport-update-service-42ff6"
Sep 22 09:05:36.757: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod5lqkz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.78 30533'
... skipping 43 lines ...
• [SLOW TEST:110.040 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":10,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:06:22.601: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:06:23.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-8" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":11,"skipped":92,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":14,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:06:24.705: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 67 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":9,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 09:05:05.594: INFO: >>> kubeConfig: /root/.kube/config
... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":10,"skipped":48,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Sep 22 09:06:08.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
Sep 22 09:06:09.069: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 22 09:06:09.361: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4068" in namespace "provisioning-4068" to be "Succeeded or Failed"
Sep 22 09:06:09.505: INFO: Pod "hostpath-symlink-prep-provisioning-4068": Phase="Pending", Reason="", readiness=false. Elapsed: 143.578441ms
Sep 22 09:06:11.649: INFO: Pod "hostpath-symlink-prep-provisioning-4068": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287384144s
Sep 22 09:06:13.793: INFO: Pod "hostpath-symlink-prep-provisioning-4068": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.431867841s
STEP: Saw pod success
Sep 22 09:06:13.793: INFO: Pod "hostpath-symlink-prep-provisioning-4068" satisfied condition "Succeeded or Failed"
Sep 22 09:06:13.793: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4068" in namespace "provisioning-4068"
Sep 22 09:06:13.944: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4068" to be fully deleted
Sep 22 09:06:14.088: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-985x
STEP: Creating a pod to test subpath
Sep 22 09:06:14.233: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-985x" in namespace "provisioning-4068" to be "Succeeded or Failed"
Sep 22 09:06:14.377: INFO: Pod "pod-subpath-test-inlinevolume-985x": Phase="Pending", Reason="", readiness=false. Elapsed: 143.445294ms
Sep 22 09:06:16.523: INFO: Pod "pod-subpath-test-inlinevolume-985x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289200701s
Sep 22 09:06:18.669: INFO: Pod "pod-subpath-test-inlinevolume-985x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435325223s
Sep 22 09:06:20.815: INFO: Pod "pod-subpath-test-inlinevolume-985x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.581290893s
STEP: Saw pod success
Sep 22 09:06:20.815: INFO: Pod "pod-subpath-test-inlinevolume-985x" satisfied condition "Succeeded or Failed"
Sep 22 09:06:20.958: INFO: Trying to get logs from node ip-172-20-33-99.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-985x container test-container-subpath-inlinevolume-985x: <nil>
STEP: delete the pod
Sep 22 09:06:21.261: INFO: Waiting for pod pod-subpath-test-inlinevolume-985x to disappear
Sep 22 09:06:21.404: INFO: Pod pod-subpath-test-inlinevolume-985x no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-985x
Sep 22 09:06:21.404: INFO: Deleting pod "pod-subpath-test-inlinevolume-985x" in namespace "provisioning-4068"
STEP: Deleting pod
Sep 22 09:06:21.547: INFO: Deleting pod "pod-subpath-test-inlinevolume-985x" in namespace "provisioning-4068"
Sep 22 09:06:21.835: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4068" in namespace "provisioning-4068" to be "Succeeded or Failed"
Sep 22 09:06:21.979: INFO: Pod "hostpath-symlink-prep-provisioning-4068": Phase="Pending", Reason="", readiness=false. Elapsed: 143.808651ms
Sep 22 09:06:24.123: INFO: Pod "hostpath-symlink-prep-provisioning-4068": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287956026s
Sep 22 09:06:26.268: INFO: Pod "hostpath-symlink-prep-provisioning-4068": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433154518s
Sep 22 09:06:28.412: INFO: Pod "hostpath-symlink-prep-provisioning-4068": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577236771s
STEP: Saw pod success
Sep 22 09:06:28.412: INFO: Pod "hostpath-symlink-prep-provisioning-4068" satisfied condition "Succeeded or Failed"
Sep 22 09:06:28.412: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4068" in namespace "provisioning-4068"
Sep 22 09:06:28.559: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4068" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 22 09:06:28.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4068" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":7,"skipped":58,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SS
------------------------------
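The hostPathSymlink subPath test above follows a pattern that recurs throughout this log: create a pod, then sample its phase roughly every two seconds until it is "Succeeded or Failed" or a 5m0s timeout expires. A minimal client-go sketch of that polling loop follows, assuming the kubeconfig path shown in the log; it is an illustration, not the framework's own wait helper.

// waitpod.go: a sketch of the "Waiting up to 5m0s for pod ... to be
// 'Succeeded or Failed'" polling pattern seen in the log above.
package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitForPodTerminal polls the pod's phase every two seconds until it is
// Succeeded or Failed, or the timeout expires, logging phase and elapsed time
// much like the lines in the log.
func waitForPodTerminal(cs kubernetes.Interface, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
    var phase corev1.PodPhase
    start := time.Now()
    err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        phase = pod.Status.Phase
        fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, phase, time.Since(start))
        return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
    })
    return phase, err
}

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)
    // Namespace and pod name taken from the log above.
    phase, err := waitForPodTerminal(cs, "provisioning-4068", "pod-subpath-test-inlinevolume-985x", 5*time.Minute)
    fmt.Println(phase, err)
}
------------------------------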
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 22 09:06:29.025: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 54 lines ...
STEP: Destroying namespace "apply-7095" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":8,"skipped":63,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSSS
------------------------------
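The ServerSideApply test above exercises shared field ownership: when two field managers apply the same field, one of them later dropping the field from its applied configuration must not delete it, because the other manager still owns it. The client-go sketch below demonstrates that behaviour on a ConfigMap; the namespace, object name and manager names are invented for illustration.

// ssa.go: a sketch of shared field ownership under server-side apply.
package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    applycorev1 "k8s.io/client-go/applyconfigurations/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)
    ctx := context.TODO()
    ns, name := "default", "ssa-demo" // illustrative names

    // Both managers apply the same key with the same value, so they become
    // co-owners of data["shared"].
    full := applycorev1.ConfigMap(name, ns).WithData(map[string]string{"shared": "v"})
    if _, err := cs.CoreV1().ConfigMaps(ns).Apply(ctx, full, metav1.ApplyOptions{FieldManager: "manager-a"}); err != nil {
        panic(err)
    }
    if _, err := cs.CoreV1().ConfigMaps(ns).Apply(ctx, full, metav1.ApplyOptions{FieldManager: "manager-b"}); err != nil {
        panic(err)
    }

    // manager-a now applies a configuration without the key, i.e. it "unsets"
    // the field from its own point of view.
    empty := applycorev1.ConfigMap(name, ns)
    if _, err := cs.CoreV1().ConfigMaps(ns).Apply(ctx, empty, metav1.ApplyOptions{FieldManager: "manager-a"}); err != nil {
        panic(err)
    }

    // The field survives because manager-b still owns it.
    cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println(`data["shared"] still present:`, cm.Data["shared"] == "v")
}
------------------------------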
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:35.039 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":12,"skipped":93,"failed":0}

S
------------------------------
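The PVC Protection test above (body elided) deletes a claim that no pod is mounting and verifies the deletion is "immediate", i.e. the kubernetes.io/pvc-protection finalizer is released right away instead of leaving the claim in Terminating. The sketch below shows that check with client-go; the namespace and claim name are hypothetical, since the real ones sit in the elided lines.

// pvcdelete.go: a sketch of verifying immediate deletion of an unused PVC.
package main

import (
    "context"
    "fmt"
    "time"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)

    // Hypothetical namespace/claim names; the real ones are in the elided lines.
    ns, claim := "pvc-protection-test", "pvc-protection-claim"

    if err := cs.CoreV1().PersistentVolumeClaims(ns).Delete(context.TODO(), claim, metav1.DeleteOptions{}); err != nil {
        panic(err)
    }

    // "Immediate" deletion: with no pod mounting the claim, it should disappear
    // within a few poll intervals rather than sticking around in Terminating.
    err = wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
        _, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil
        }
        return false, nil
    })
    fmt.Println("PVC fully deleted:", err == nil)
}
------------------------------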
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if cluster-info dump succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1078
STEP: running cluster-info dump
Sep 22 09:06:24.732: INFO: Running '/tmp/kubectl3201482586/kubectl --server=https://api.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1413 cluster-info dump'
Sep 22 09:06:32.147: INFO: stderr: ""
Sep 22 09:06:32.150: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"13391\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-33-99.sa-east-1.compute.internal\",\n                \"uid\": \"828cb512-9898-4517-ab15-f6a3b87b4230\",\n                \"resourceVersion\": \"13367\",\n                \"creationTimestamp\": \"2021-09-22T08:56:31Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"sa-east-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"sa-east-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-sa-east-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-33-99.sa-east-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-33-99.sa-east-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"sa-east-1\",\n                    \"topology.kubernetes.io/zone\": \"sa-east-1a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-5735\\\":\\\"ip-172-20-33-99.sa-east-1.compute.internal\\\",\\\"csi-hostpath-volume-expand-4123\\\":\\\"ip-172-20-33-99.sa-east-1.compute.internal\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"66:6d:9d:77:da:40\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.33.99\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": \"aws:///sa-east-1a/i-054e76930bbc32314\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3977792Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3875392Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n         
               \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T08:56:44Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:44Z\",\n                        \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:22Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:31Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:22Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:31Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:22Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:31Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:22Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:51Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.33.99\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"18.230.24.25\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-18-230-24-25.sa-east-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec214dee1f497e5a0c3f972da49e4684\",\n                    \"systemUUID\": \"EC2F0EFC-6030-7470-6A6E-010798C9F55F\",\n                    \"bootID\": \"ba57dada-e411-469a-ae8a-cc140d532032\",\n                    \"kernelVersion\": 
\"4.14.243-185.433.amzn2.x86_64\",\n                    \"osImage\": \"Amazon Linux 2\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.5\",\n                    \"kubeProxyVersion\": \"v1.21.5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89\",\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4\"\n                        ],\n                        \"sizeBytes\": 112029652\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 105352393\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2\",\n                            \"k8s.gcr.io/etcd:3.4.13-0\"\n                        ],\n                        \"sizeBytes\": 86742272\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:853b221d3341add7aaadf5f81dd088ea943ab9c918766e295321294b035f3f3e\",\n                            \"docker.io/library/nginx:latest\"\n                        ],\n                        \"sizeBytes\": 53799391\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276\",\n                            \"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4\"\n                        ],\n                        \"sizeBytes\": 24757245\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 21212251\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            
\"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 21205045\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 19388223\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 18451536\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def\",\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\"\n                        ],\n                        \"sizeBytes\": 15209393\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 13995876\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 13707249\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 
9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 8415088\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-hostpath-ephemeral-5735^4dbbcb22-1b84-11ec-9ae4-1a1eb9674429\",\n                    \"kubernetes.io/csi/csi-hostpath-provisioning-9098^07a8328a-1b84-11ec-acc2-32264a8387f4\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-provisioning-9098^07a8328a-1b84-11ec-acc2-32264a8387f4\",\n                        \"devicePath\": \"\"\n                    },\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-ephemeral-5735^5b037f4d-1b84-11ec-9ae4-1a1eb9674429\",\n                        \"devicePath\": \"\"\n                    },\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-ephemeral-5735^4dbbcb22-1b84-11ec-9ae4-1a1eb9674429\",\n                 
       \"devicePath\": \"\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-38-78.sa-east-1.compute.internal\",\n                \"uid\": \"3dafcb30-429e-4df2-a920-fc958a459d00\",\n                \"resourceVersion\": \"13377\",\n                \"creationTimestamp\": \"2021-09-22T08:56:35Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"sa-east-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"sa-east-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-sa-east-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-38-78.sa-east-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-38-78.sa-east-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"sa-east-1\",\n                    \"topology.kubernetes.io/zone\": \"sa-east-1a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-mock-csi-mock-volumes-2908\\\":\\\"csi-mock-csi-mock-volumes-2908\\\",\\\"csi-mock-csi-mock-volumes-5239\\\":\\\"csi-mock-csi-mock-volumes-5239\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"7a:2d:2f:27:28:75\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.38.78\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.2.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.2.0/24\"\n                ],\n                \"providerID\": \"aws:///sa-east-1a/i-021fa33d39950fda8\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3977792Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3875392Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n          
              \"lastHeartbeatTime\": \"2021-09-22T08:56:49Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:49Z\",\n                        \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:05:05Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:35Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:05:05Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:35Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:05:05Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:35Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:05:05Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:55Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.38.78\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"177.71.173.235\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-177-71-173-235.sa-east-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec214dee1f497e5a0c3f972da49e4684\",\n                    \"systemUUID\": \"EC2529A9-2450-F27B-6D1C-0E90A9A953E3\",\n                    \"bootID\": \"19ca5658-fdd9-45c3-a4f1-d920252beab7\",\n                    \"kernelVersion\": \"4.14.243-185.433.amzn2.x86_64\",\n                    \"osImage\": \"Amazon Linux 2\",\n                    
\"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.5\",\n                    \"kubeProxyVersion\": \"v1.21.5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 105352393\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:853b221d3341add7aaadf5f81dd088ea943ab9c918766e295321294b035f3f3e\",\n                            \"docker.io/library/nginx:latest\"\n                        ],\n                        \"sizeBytes\": 53799391\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n                        ],\n                        \"sizeBytes\": 41902332\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 21212251\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 21205045\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n        
            },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 19388223\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 18451536\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 13995876\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 13707249\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 8415088\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac\",\n                            \"k8s.gcr.io/e2e-test-images/nonewprivs:1.3\"\n                        ],\n                        \"sizeBytes\": 3263463\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0\",\n                            \"gcr.io/authenticated-image-pulling/alpine:3.7\"\n                        ],\n                        \"sizeBytes\": 2110879\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5239^4\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5239^4\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-41-3.sa-east-1.compute.internal\",\n                \"uid\": \"321dc9de-f7ba-473c-934b-fa11ed58dcf5\",\n                \"resourceVersion\": \"13267\",\n                \"creationTimestamp\": \"2021-09-22T08:57:10Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"sa-east-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"sa-east-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-sa-east-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-41-3.sa-east-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n          
          \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-41-3.sa-east-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"sa-east-1\",\n                    \"topology.kubernetes.io/zone\": \"sa-east-1a\"\n                },\n                \"annotations\": {\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"8a:33:e7:e3:04:c3\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.41.3\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.4.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.4.0/24\"\n                ],\n                \"providerID\": \"aws:///sa-east-1a/i-021f849308fc74b8f\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3977792Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3875392Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T08:57:51Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:51Z\",\n                        \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:20Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:10Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:20Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:10Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": 
\"2021-09-22T09:06:20Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:10Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:20Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:20Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.41.3\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"52.67.215.179\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-52-67-215-179.sa-east-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec214dee1f497e5a0c3f972da49e4684\",\n                    \"systemUUID\": \"EC2D224F-CDD0-5267-18E6-D23E51D6191D\",\n                    \"bootID\": \"e4ce3c59-8707-4b3e-9380-61aad28e3840\",\n                    \"kernelVersion\": \"4.14.243-185.433.amzn2.x86_64\",\n                    \"osImage\": \"Amazon Linux 2\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.5\",\n                    \"kubeProxyVersion\": \"v1.21.5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 105352393\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:853b221d3341add7aaadf5f81dd088ea943ab9c918766e295321294b035f3f3e\",\n                            \"docker.io/library/nginx:latest\"\n                        ],\n                        \"sizeBytes\": 53799391\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n              
              \"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n                        ],\n                        \"sizeBytes\": 41902332\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 21212251\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 21205045\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 19388223\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 18451536\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                      
      \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 13995876\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 8415088\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-50-246.sa-east-1.compute.internal\",\n                \"uid\": \"d986a1cb-7cf2-4ee9-9eb4-e27465dc3752\",\n                \"resourceVersion\": \"12989\",\n                \"creationTimestamp\": \"2021-09-22T08:57:02Z\",\n                \"labels\": {\n                    
\"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"sa-east-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"sa-east-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-sa-east-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-50-246.sa-east-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-50-246.sa-east-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"sa-east-1\",\n                    \"topology.kubernetes.io/zone\": \"sa-east-1a\"\n                },\n                \"annotations\": {\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"b2:32:72:ca:f8:6d\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.50.246\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.3.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.3.0/24\"\n                ],\n                \"providerID\": \"aws:///sa-east-1a/i-091c07c3c3420ce5f\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3977792Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3875392Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T08:57:11Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:11Z\",\n                        \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:05:52Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:02Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        
\"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:05:52Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:02Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:05:52Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:02Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:05:52Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:12Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.50.246\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"18.229.163.214\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-18-229-163-214.sa-east-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec214dee1f497e5a0c3f972da49e4684\",\n                    \"systemUUID\": \"EC28CC8D-6C26-82C7-D89B-7CD37695D936\",\n                    \"bootID\": \"0f516c9b-3ae0-4461-b915-81add4ffc2d7\",\n                    \"kernelVersion\": \"4.14.243-185.433.amzn2.x86_64\",\n                    \"osImage\": \"Amazon Linux 2\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.5\",\n                    \"kubeProxyVersion\": \"v1.21.5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 105352393\n                    },\n                    {\n                        \"names\": [\n                            
\"docker.io/library/nginx@sha256:853b221d3341add7aaadf5f81dd088ea943ab9c918766e295321294b035f3f3e\",\n                            \"docker.io/library/nginx:latest\"\n                        ],\n                        \"sizeBytes\": 53799391\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n                        ],\n                        \"sizeBytes\": 41902332\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 21212251\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 21205045\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            
\"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 19388223\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 18451536\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b\",\n                            \"k8s.gcr.io/e2e-test-images/nonroot:1.1\"\n                        ],\n                        \"sizeBytes\": 17748448\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 13995876\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 8415088\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/busybox:latest\"\n                        ],\n                        \"sizeBytes\": 1144547\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"80fe72e0-6667-4a79-8e40-01797163f5c9\",\n                \"resourceVersion\": \"13010\",\n                \"creationTimestamp\": \"2021-09-22T08:55:07Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"c5.large\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"sa-east-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"sa-east-1a\",\n                    \"kops.k8s.io/instancegroup\": \"master-sa-east-1a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-52-88.sa-east-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"master\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"c5.large\",\n                    \"topology.kubernetes.io/region\": \"sa-east-1\",\n                    \"topology.kubernetes.io/zone\": \"sa-east-1a\"\n                },\n                \"annotations\": {\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"ea:4f:87:b9:47:4b\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.52.88\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": \"aws:///sa-east-1a/i-05cbe2c5697061af3\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                
    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3793476Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3691076Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T08:55:39Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:55:39Z\",\n                        \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:07Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:55:07Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:07Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:55:07Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:07Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:55:07Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-22T09:06:07Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:55:47Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.52.88\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"18.231.52.252\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": 
\"ip-172-20-52-88.sa-east-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-18-231-52-252.sa-east-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec214dee1f497e5a0c3f972da49e4684\",\n                    \"systemUUID\": \"EC229269-3A4E-A6FC-8296-598D569D5AEE\",\n                    \"bootID\": \"69a2dece-abe7-448d-92bd-1774e8453dff\",\n                    \"kernelVersion\": \"4.14.243-185.433.amzn2.x86_64\",\n                    \"osImage\": \"Amazon Linux 2\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.5\",\n                    \"kubeProxyVersion\": \"v1.21.5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4\",\n                            \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\"\n                        ],\n                        \"sizeBytes\": 172004323\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 127101402\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 121137987\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 114228758\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 113348119\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 105352393\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.21.5\"\n                        ],\n                        \"sizeBytes\": 52099384\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 25622039\n                    },\n                    {\n                        \"names\": [\n                            
\"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 19388223\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"5859\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-98qd6.16a7190c4e06c24c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f1105ae7-f667-4f1e-bbfd-ec0f9c6a7e7d\",\n                \"resourceVersion\": \"99\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-98qd6\",\n                \"uid\": \"7f1ca703-86d6-4945-841f-d586efbdda26\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"397\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:20Z\",\n            \"count\": 7,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-98qd6.16a7191a7ceb67f4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3aab2e44-d5d9-4647-8751-3a35d6f17092\",\n                \"resourceVersion\": \"106\",\n                \"creationTimestamp\": \"2021-09-22T08:56:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-98qd6\",\n                \"uid\": \"7f1ca703-86d6-4945-841f-d586efbdda26\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"409\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:31Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:31Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                
\"name\": \"coredns-5dc785954d-98qd6.16a7191cf77a454a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"02edcbed-e9f6-4231-b8e1-6fcac61df9e0\",\n                \"resourceVersion\": \"133\",\n                \"creationTimestamp\": \"2021-09-22T08:56:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-98qd6\",\n                \"uid\": \"7f1ca703-86d6-4945-841f-d586efbdda26\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"573\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-98qd6.16a7191f4c33c1bc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c12952ae-afeb-4205-a195-d20f8d11fec5\",\n                \"resourceVersion\": \"149\",\n                \"creationTimestamp\": \"2021-09-22T08:56:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-98qd6\",\n                \"uid\": \"7f1ca703-86d6-4945-841f-d586efbdda26\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"625\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-5dc785954d-98qd6 to ip-172-20-33-99.sa-east-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:52Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:52Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-98qd6.16a7191f8a6ba6d1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fe043347-7a6e-44a0-9972-ce215801d62b\",\n                \"resourceVersion\": \"151\",\n                \"creationTimestamp\": \"2021-09-22T08:56:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-98qd6\",\n                \"uid\": \"7f1ca703-86d6-4945-841f-d586efbdda26\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"666\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\"\",\n            \"source\": {\n              
  \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:53Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-98qd6.16a71920159eedd7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f5f89549-eea6-4a60-b624-360a5ff3311e\",\n                \"resourceVersion\": \"154\",\n                \"creationTimestamp\": \"2021-09-22T08:56:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-98qd6\"\n            },\n            \"reason\": \"TaintManagerEviction\",\n            \"message\": \"Cancelling deletion of Pod kube-system/coredns-5dc785954d-98qd6\",\n            \"source\": {\n                \"component\": \"taint-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:55Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-98qd6.16a7192077bffc46\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a939e439-8aeb-46e1-b08f-dfee966a347d\",\n                \"resourceVersion\": \"156\",\n                \"creationTimestamp\": \"2021-09-22T08:56:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-98qd6\",\n                \"uid\": \"7f1ca703-86d6-4945-841f-d586efbdda26\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"666\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\" in 3.981710314s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:57Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-98qd6.16a719208034deae\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"28056624-d2c7-466a-b058-9deff30a8ac2\",\n                \"resourceVersion\": \"157\",\n                \"creationTimestamp\": \"2021-09-22T08:56:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-98qd6\",\n                \"uid\": \"7f1ca703-86d6-4945-841f-d586efbdda26\",\n                
\"apiVersion\": \"v1\",\n                \"resourceVersion\": \"666\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:57Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-98qd6.16a7192086db40e7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"91befb03-6215-46fd-8060-ce56d5b995cb\",\n                \"resourceVersion\": \"158\",\n                \"creationTimestamp\": \"2021-09-22T08:56:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-98qd6\",\n                \"uid\": \"7f1ca703-86d6-4945-841f-d586efbdda26\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"666\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:57Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-w5582.16a71921da9cdc82\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a416483f-b17a-4ee0-b6d8-9e8b4b213873\",\n                \"resourceVersion\": \"166\",\n                \"creationTimestamp\": \"2021-09-22T08:57:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-w5582\",\n                \"uid\": \"859a90ad-57f4-405c-bf13-6e4ae34fd457\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"726\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-5dc785954d-w5582 to ip-172-20-38-78.sa-east-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:03Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-w5582.16a719221aa78678\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"dbebfc0d-c32e-49bf-a04b-bcecf07b01e1\",\n                \"resourceVersion\": \"167\",\n                \"creationTimestamp\": \"2021-09-22T08:57:04Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-w5582\",\n                \"uid\": \"859a90ad-57f4-405c-bf13-6e4ae34fd457\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"728\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:04Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:04Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-w5582.16a719230a4fde27\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ee01855a-3587-4876-845a-b7353b81e5e5\",\n                \"resourceVersion\": \"180\",\n                \"creationTimestamp\": \"2021-09-22T08:57:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-w5582\",\n                \"uid\": \"859a90ad-57f4-405c-bf13-6e4ae34fd457\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"728\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\" in 4.020761936s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:08Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:08Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-w5582.16a7192312c0c179\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3496b836-3555-49ff-b2f6-11a0b283ff9d\",\n                \"resourceVersion\": \"182\",\n                \"creationTimestamp\": \"2021-09-22T08:57:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-w5582\",\n                \"uid\": \"859a90ad-57f4-405c-bf13-6e4ae34fd457\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"728\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:08Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:08Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-w5582.16a719231aa9b241\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7f1014bc-758c-423c-b7b6-9e6722696e5d\",\n                \"resourceVersion\": \"183\",\n                \"creationTimestamp\": \"2021-09-22T08:57:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-w5582\",\n                \"uid\": \"859a90ad-57f4-405c-bf13-6e4ae34fd457\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"728\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:08Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:08Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d.16a7190c4dec498e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6483e9d7-66ee-407e-acce-3a6148841a38\",\n                \"resourceVersion\": \"60\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d\",\n                \"uid\": \"d9e12ddf-b816-49eb-a262-568d07e934be\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"388\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-5dc785954d-98qd6\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d.16a71921da303ba1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9537a8f8-d9fb-400f-a042-b4321403d0c4\",\n                \"resourceVersion\": \"165\",\n                \"creationTimestamp\": \"2021-09-22T08:57:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d\",\n                \"uid\": \"d9e12ddf-b816-49eb-a262-568d07e934be\",\n                
\"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"724\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-5dc785954d-w5582\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:03Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6.16a7190c51cd6189\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6e20b2c1-a60c-49ed-b063-b69d43ac1262\",\n                \"resourceVersion\": \"100\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6\",\n                \"uid\": \"8586f49e-0718-4945-a8e4-8d6fd3edc75c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"399\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:20Z\",\n            \"count\": 7,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6.16a7191a7e01685d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"72d98080-4881-48c3-8b4a-5584f9ac4bac\",\n                \"resourceVersion\": \"108\",\n                \"creationTimestamp\": \"2021-09-22T08:56:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6\",\n                \"uid\": \"8586f49e-0718-4945-a8e4-8d6fd3edc75c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"416\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:31Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:31Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6.16a7191cf84a933a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"509ef193-f501-4035-bf0b-9baf16985dc6\",\n                \"resourceVersion\": \"134\",\n                \"creationTimestamp\": \"2021-09-22T08:56:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6\",\n                \"uid\": \"8586f49e-0718-4945-a8e4-8d6fd3edc75c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"577\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6.16a7191f87c9b5d4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"afa4fada-a22f-406e-950e-64b2220f8bf4\",\n                \"resourceVersion\": \"150\",\n                \"creationTimestamp\": \"2021-09-22T08:56:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6\",\n                \"uid\": \"8586f49e-0718-4945-a8e4-8d6fd3edc75c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"626\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-autoscaler-84d4cfd89c-zk4b6 to ip-172-20-33-99.sa-east-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:53Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6.16a7191fbffb454c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"52276e64-371c-4e7b-a4fd-6012f639e239\",\n                \"resourceVersion\": \"152\",\n                \"creationTimestamp\": \"2021-09-22T08:56:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6\",\n                \"uid\": \"8586f49e-0718-4945-a8e4-8d6fd3edc75c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"670\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n             
   \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:54Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6.16a71920159f11e3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"62196040-1f3c-42c8-b5ad-3515e1f9f820\",\n                \"resourceVersion\": \"155\",\n                \"creationTimestamp\": \"2021-09-22T08:56:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6\"\n            },\n            \"reason\": \"TaintManagerEviction\",\n            \"message\": \"Cancelling deletion of Pod kube-system/coredns-autoscaler-84d4cfd89c-zk4b6\",\n            \"source\": {\n                \"component\": \"taint-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:55Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6.16a71921ba4217aa\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0293283a-edd9-4fb2-be90-cbf0f013a42c\",\n                \"resourceVersion\": \"161\",\n                \"creationTimestamp\": \"2021-09-22T08:57:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6\",\n                \"uid\": \"8586f49e-0718-4945-a8e4-8d6fd3edc75c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"670\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\" in 8.493889329s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:02Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6.16a71921c2806a28\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b1a35c98-abce-4fdb-8776-95e8a66b1e85\",\n                \"resourceVersion\": \"162\",\n                \"creationTimestamp\": \"2021-09-22T08:57:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6\",\n                \"uid\": 
\"8586f49e-0718-4945-a8e4-8d6fd3edc75c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"670\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:02Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6.16a71921ca52148f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"37b30f9e-2b0f-4280-bd8a-a098dea9c317\",\n                \"resourceVersion\": \"163\",\n                \"creationTimestamp\": \"2021-09-22T08:57:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-zk4b6\",\n                \"uid\": \"8586f49e-0718-4945-a8e4-8d6fd3edc75c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"670\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:02Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c.16a7190c4f8e8efc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d98f447c-000f-4aee-954d-6487f47710dc\",\n                \"resourceVersion\": \"62\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                \"uid\": \"e0d9b561-c0bc-4868-b8d7-3cb01d9d5f51\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"387\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-autoscaler-84d4cfd89c-zk4b6\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler.16a7190c4976ce7a\",\n                \"namespace\": 
\"kube-system\",\n                \"uid\": \"e344bf12-1c2c-4c30-99f7-316cae30b3df\",\n                \"resourceVersion\": \"53\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler\",\n                \"uid\": \"cdd1934b-6b67-4c1d-9670-1f97affb8e55\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"224\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-autoscaler-84d4cfd89c to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16a7190c49aceed3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d5d0283c-6334-49f3-8be0-717086a02e8d\",\n                \"resourceVersion\": \"54\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"4b8e1cdf-43f3-4eda-9322-6a62cb1189ee\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"217\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16a71921d99b10d6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ad9e303e-b3a9-46a7-a32f-042799c41225\",\n                \"resourceVersion\": \"164\",\n                \"creationTimestamp\": \"2021-09-22T08:57:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"4b8e1cdf-43f3-4eda-9322-6a62cb1189ee\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"723\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:03Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n   
     {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-bvg62.16a7190c511c9b5b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7f394435-77d8-4146-9073-79f999c71332\",\n                \"resourceVersion\": \"85\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-bvg62\",\n                \"uid\": \"595c7b56-8eb0-43a0-995d-d33a41312d1f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"398\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:56Z\",\n            \"count\": 5,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-bvg62.16a719157c56baba\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"73c5c0e6-cba8-42ec-bc6c-d09d85bfdb62\",\n                \"resourceVersion\": \"90\",\n                \"creationTimestamp\": \"2021-09-22T08:56:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-bvg62\",\n                \"uid\": \"595c7b56-8eb0-43a0-995d-d33a41312d1f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"411\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/dns-controller-59b7d7865d-bvg62 to ip-172-20-52-88.sa-east-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-bvg62.16a71915a9f04576\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f2fe09b1-0325-4af5-ac7b-a6d0063b5737\",\n                \"resourceVersion\": \"93\",\n                \"creationTimestamp\": \"2021-09-22T08:56:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-bvg62\",\n                \"uid\": \"595c7b56-8eb0-43a0-995d-d33a41312d1f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"513\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\\\" already present on machine\",\n            
\"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-bvg62.16a71915adaf36bd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"98749ba9-c9e1-4eb3-a81f-57713e83f28d\",\n                \"resourceVersion\": \"95\",\n                \"creationTimestamp\": \"2021-09-22T08:56:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-bvg62\",\n                \"uid\": \"595c7b56-8eb0-43a0-995d-d33a41312d1f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"513\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-bvg62.16a71915b80ac4fb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"57fe26a9-7771-47d7-8806-b0be3358f35a\",\n                \"resourceVersion\": \"97\",\n                \"creationTimestamp\": \"2021-09-22T08:56:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-bvg62\",\n                \"uid\": \"595c7b56-8eb0-43a0-995d-d33a41312d1f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"513\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:11Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d.16a7190c4df4005f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"23d517ae-1d09-4805-ab30-727c94a2b57a\",\n                \"resourceVersion\": \"61\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d\",\n                \"uid\": \"3a96c73f-2c52-4943-a820-4310841314d4\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"390\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: dns-controller-59b7d7865d-bvg62\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller.16a7190c4a599415\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f3e32d94-92af-4b90-a811-6cf18d4f4b5e\",\n                \"resourceVersion\": \"56\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller\",\n                \"uid\": \"41a302db-8d4f-4c45-bc26-54bce1e85304\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"231\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set dns-controller-59b7d7865d to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-52-88.sa-east-1.compute.internal.16a718fc2a42ad12\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3c93d5f2-b751-4dbc-866b-33bc2d201acd\",\n                \"resourceVersion\": \"18\",\n                \"creationTimestamp\": \"2021-09-22T08:55:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"ac6561eb34d60e9b6fdbbcea2fe56a05\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-52-88.sa-east-1.compute.internal.16a718fff4b6482e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"efe3eb8c-5a3a-4cca-abf0-7e3ae31cf06d\",\n                \"resourceVersion\": \"34\",\n                \"creationTimestamp\": \"2021-09-22T08:55:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"ac6561eb34d60e9b6fdbbcea2fe56a05\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\" in 16.281462663s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-52-88.sa-east-1.compute.internal.16a718fffa37ba8b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7ca44ecd-fafd-4a39-9978-86ed4136ab4d\",\n                \"resourceVersion\": \"36\",\n                \"creationTimestamp\": \"2021-09-22T08:55:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"ac6561eb34d60e9b6fdbbcea2fe56a05\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-52-88.sa-east-1.compute.internal.16a718fffffbbdb6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c83fb44e-bdbc-4424-ab9d-ca18bea7ae66\",\n                \"resourceVersion\": \"37\",\n                \"creationTimestamp\": \"2021-09-22T08:55:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"ac6561eb34d60e9b6fdbbcea2fe56a05\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": 
\"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-52-88.sa-east-1.compute.internal.16a718fc1fbb7948\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bf529cd1-f9ef-4122-9e92-c8b90618de23\",\n                \"resourceVersion\": \"17\",\n                \"creationTimestamp\": \"2021-09-22T08:55:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"c99abb15873fdfe7916b995d10ef07e1\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-52-88.sa-east-1.compute.internal.16a718ff9cc65ccf\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3cc5df19-1756-4cec-91dd-9a0229a852c6\",\n                \"resourceVersion\": \"33\",\n                \"creationTimestamp\": \"2021-09-22T08:55:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"c99abb15873fdfe7916b995d10ef07e1\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\" in 14.982732836s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:36Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"etcd-manager-main-ip-172-20-52-88.sa-east-1.compute.internal.16a718fff8fbcaab\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0405be7b-9140-44b3-a0f2-7d47255e432a\",\n                \"resourceVersion\": \"35\",\n                \"creationTimestamp\": \"2021-09-22T08:55:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"c99abb15873fdfe7916b995d10ef07e1\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-52-88.sa-east-1.compute.internal.16a71900026085bf\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"be322624-25e2-456e-acb9-b661609706fa\",\n                \"resourceVersion\": \"38\",\n                \"creationTimestamp\": \"2021-09-22T08:55:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"c99abb15873fdfe7916b995d10ef07e1\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-leader.16a71915dc5fb54a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7db39e6e-5832-438d-9aa9-8fe1a8df2734\",\n                \"resourceVersion\": \"98\",\n                \"creationTimestamp\": \"2021-09-22T08:56:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ConfigMap\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-leader\",\n                \"uid\": \"df3f2a20-2b68-4713-b63b-c979522838c0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"521\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-52-88.sa-east-1.compute.internal_b2c4ddd0-63c6-4109-9555-9b74b6f43e44 became leader\",\n            
\"source\": {\n                \"component\": \"ip-172-20-52-88.sa-east-1.compute.internal_b2c4ddd0-63c6-4109-9555-9b74b6f43e44\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:11Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-ttxzm.16a719157d64dd5d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"80dcaa6a-5165-418b-8b5a-b75047a3aef1\",\n                \"resourceVersion\": \"91\",\n                \"creationTimestamp\": \"2021-09-22T08:56:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-ttxzm\",\n                \"uid\": \"e5060cc9-8d37-4af6-93cd-b2c24100f94f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"514\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kops-controller-ttxzm to ip-172-20-52-88.sa-east-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-ttxzm.16a71915a864ab24\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7f9858ca-3a3e-4634-ab2d-016b92ca6435\",\n                \"resourceVersion\": \"92\",\n                \"creationTimestamp\": \"2021-09-22T08:56:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-ttxzm\",\n                \"uid\": \"e5060cc9-8d37-4af6-93cd-b2c24100f94f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"515\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-ttxzm.16a71915ac6f6e1a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"77d2dbbe-1685-4774-a208-200fae796d5c\",\n                \"resourceVersion\": \"94\",\n                \"creationTimestamp\": \"2021-09-22T08:56:10Z\"\n            },\n            \"involvedObject\": {\n                
\"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-ttxzm\",\n                \"uid\": \"e5060cc9-8d37-4af6-93cd-b2c24100f94f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"515\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-ttxzm.16a71915b55e0625\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b1a86e8d-c195-48a9-835a-8f2eec1279a2\",\n                \"resourceVersion\": \"96\",\n                \"creationTimestamp\": \"2021-09-22T08:56:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-ttxzm\",\n                \"uid\": \"e5060cc9-8d37-4af6-93cd-b2c24100f94f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"515\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller.16a719157c7ad2be\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0047df1d-52a4-48c5-80ae-e1b2e59c34d7\",\n                \"resourceVersion\": \"89\",\n                \"creationTimestamp\": \"2021-09-22T08:56:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller\",\n                \"uid\": \"290be6f9-2e7a-4964-9681-6f15c03dc080\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"407\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kops-controller-ttxzm\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n            
    \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal.16a718fc2a4362ee\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"790a8a75-9e38-478d-ae55-22b55a491287\",\n                \"resourceVersion\": \"42\",\n                \"creationTimestamp\": \"2021-09-22T08:55:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"0e96cb7bf34225b429e6a42f5e99a8f6\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:48Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal.16a718fd89939382\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2f478e4d-d221-46f4-8131-554e7224de41\",\n                \"resourceVersion\": \"43\",\n                \"creationTimestamp\": \"2021-09-22T08:55:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"0e96cb7bf34225b429e6a42f5e99a8f6\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:27Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:48Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal.16a718fda97f2980\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"276a4249-d58b-4066-81a5-7dcd3528bb4f\",\n                \"resourceVersion\": \"44\",\n                \"creationTimestamp\": \"2021-09-22T08:55:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"0e96cb7bf34225b429e6a42f5e99a8f6\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": 
\"Started\",\n            \"message\": \"Started container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:27Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:49Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal.16a718fdab33969d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a2a3fadd-dfcb-484f-8991-5b9b7d772a1a\",\n                \"resourceVersion\": \"30\",\n                \"creationTimestamp\": \"2021-09-22T08:55:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"0e96cb7bf34225b429e6a42f5e99a8f6\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:27Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal.16a718fdd0663632\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"165cc79e-6a74-420d-8b0e-bca8fa589ffc\",\n                \"resourceVersion\": \"31\",\n                \"creationTimestamp\": \"2021-09-22T08:55:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"0e96cb7bf34225b429e6a42f5e99a8f6\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:28Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal.16a718fde598391f\",\n                \"namespace\": \"kube-system\",\n                
\"uid\": \"a1fa15ed-9bac-40e2-89f4-b6b560ab5e51\",\n                \"resourceVersion\": \"32\",\n                \"creationTimestamp\": \"2021-09-22T08:55:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"0e96cb7bf34225b429e6a42f5e99a8f6\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:28Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-52-88.sa-east-1.compute.internal.16a718fc3f558f9e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dc724ee3-9601-4bf2-bdde-534c14831871\",\n                \"resourceVersion\": \"48\",\n                \"creationTimestamp\": \"2021-09-22T08:55:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"e4b1563b38f6160e1d813785ecc581d7\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:16Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-52-88.sa-east-1.compute.internal.16a718fd8994e032\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"520d3b65-2c58-4457-af18-84bddb7588c1\",\n                \"resourceVersion\": \"49\",\n                \"creationTimestamp\": \"2021-09-22T08:55:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"e4b1563b38f6160e1d813785ecc581d7\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-controller-manager\",\n            \"source\": 
{\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:27Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:16Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-52-88.sa-east-1.compute.internal.16a718fd9c7f6a79\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c5e1188b-13f2-40dc-9aec-c34edc7cd60b\",\n                \"resourceVersion\": \"50\",\n                \"creationTimestamp\": \"2021-09-22T08:55:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"e4b1563b38f6160e1d813785ecc581d7\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:27Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:16Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-52-88.sa-east-1.compute.internal.16a71905263e4675\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6721dd96-fc91-4550-8fd9-f35640d18ece\",\n                \"resourceVersion\": \"46\",\n                \"creationTimestamp\": \"2021-09-22T08:55:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"e4b1563b38f6160e1d813785ecc581d7\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"BackOff\",\n            \"message\": \"Back-off restarting failed container\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:59Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:00Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-52-88.sa-east-1.compute.internal.16a7190c4abba68b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d2a6d09c-83f5-43b9-bda9-91f1308f4f80\",\n                \"resourceVersion\": 
\"57\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"83cfeb91-c507-4435-9cd2-1ad37604d9d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"264\"\n            },\n            \"reason\": \"NodeNotReady\",\n            \"message\": \"Node is not ready\",\n            \"source\": {\n                \"component\": \"node-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.16a719091027e5fb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0054af19-0926-4444-b237-a49678c1e6e9\",\n                \"resourceVersion\": \"51\",\n                \"creationTimestamp\": \"2021-09-22T08:55:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"f77bd46c-f73d-4eab-aadd-150b3e16d25e\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"262\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-52-88.sa-east-1.compute.internal_562a79d3-1e3f-42c8-93a0-00fadc796f61 became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:16Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns.16a7190c4c2fd90d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b95029db-7b99-4acd-ac48-52712eed5a96\",\n                \"resourceVersion\": \"59\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"PodDisruptionBudget\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns\",\n                \"uid\": \"95c637fd-f87c-4730-9137-8f633566b2dd\",\n                \"apiVersion\": \"policy/v1\",\n                \"resourceVersion\": \"220\"\n            },\n            \"reason\": \"NoPods\",\n            \"message\": \"No matching pods found\",\n            \"source\": {\n                \"component\": \"controllermanager\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-flannel-ds-47jgp.16a71921a5ed1650\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b7fb950f-d3ab-488f-a9e6-3e2f4cef6854\",\n                \"resourceVersion\": \"160\",\n                \"creationTimestamp\": \"2021-09-22T08:57:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-47jgp\",\n                \"uid\": \"2d955d8a-0aa4-4ae6-8f1c-e31ff3d59d05\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"710\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-47jgp to ip-172-20-50-246.sa-east-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:02Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-47jgp.16a719222e97eed7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c5d03b9d-4f61-4031-a74e-5d224bac95fe\",\n                \"resourceVersion\": \"203\",\n                \"creationTimestamp\": \"2021-09-22T08:57:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-47jgp\",\n                \"uid\": \"2d955d8a-0aa4-4ae6-8f1c-e31ff3d59d05\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"712\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:04Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:04Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-47jgp.16a7192364d7be7b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"40f81363-1dd5-4434-ad19-0c6910ba0d0c\",\n                \"resourceVersion\": \"204\",\n                \"creationTimestamp\": \"2021-09-22T08:57:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-47jgp\",\n                \"uid\": \"2d955d8a-0aa4-4ae6-8f1c-e31ff3d59d05\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"712\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 5.20510362s\",\n            \"source\": {\n                
\"component\": \"kubelet\",\n                \"host\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:09Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:09Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-47jgp.16a719236e88ba1d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1dea7039-878f-495f-ada5-735dc0e61c1e\",\n                \"resourceVersion\": \"205\",\n                \"creationTimestamp\": \"2021-09-22T08:57:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-47jgp\",\n                \"uid\": \"2d955d8a-0aa4-4ae6-8f1c-e31ff3d59d05\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"712\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:09Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:09Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-47jgp.16a7192376249fdb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0f106424-baa4-4725-9d6c-163c1ac0cc04\",\n                \"resourceVersion\": \"206\",\n                \"creationTimestamp\": \"2021-09-22T08:57:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-47jgp\",\n                \"uid\": \"2d955d8a-0aa4-4ae6-8f1c-e31ff3d59d05\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"712\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-47jgp.16a719238c4abc90\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2d8d74ce-c7d6-4619-9aab-8b8569b1f888\",\n                \"resourceVersion\": \"207\",\n                \"creationTimestamp\": \"2021-09-22T08:57:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n               
 \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-47jgp\",\n                \"uid\": \"2d955d8a-0aa4-4ae6-8f1c-e31ff3d59d05\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"712\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-47jgp.16a71923902519c7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"eaec7335-888e-45df-9f38-9da89e8e0cde\",\n                \"resourceVersion\": \"208\",\n                \"creationTimestamp\": \"2021-09-22T08:57:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-47jgp\",\n                \"uid\": \"2d955d8a-0aa4-4ae6-8f1c-e31ff3d59d05\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"712\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-47jgp.16a7192397dd0953\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"69b5cea5-e237-4580-81d4-9a52fd41966b\",\n                \"resourceVersion\": \"209\",\n                \"creationTimestamp\": \"2021-09-22T08:57:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-47jgp\",\n                \"uid\": \"2d955d8a-0aa4-4ae6-8f1c-e31ff3d59d05\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"712\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-895kq.16a7191b54691fe7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ed47b9aa-08da-413c-937e-371cb3f2b196\",\n                \"resourceVersion\": \"120\",\n                \"creationTimestamp\": \"2021-09-22T08:56:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-895kq\",\n                \"uid\": \"e7c93001-8df4-40a1-9188-1b22b7d1ce5b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"597\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-895kq to ip-172-20-38-78.sa-east-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:35Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-895kq.16a7191cecf0fcea\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0bee9fdf-a2e8-476c-861c-00e7dc377870\",\n                \"resourceVersion\": \"132\",\n                \"creationTimestamp\": \"2021-09-22T08:56:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-895kq\",\n                \"uid\": \"e7c93001-8df4-40a1-9188-1b22b7d1ce5b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"599\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:41Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-895kq.16a7191e3eb741e3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5ec6df02-9b57-4b6d-b7c4-a2ed976d8d2e\",\n                \"resourceVersion\": \"142\",\n                \"creationTimestamp\": \"2021-09-22T08:56:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-895kq\",\n                \"uid\": \"e7c93001-8df4-40a1-9188-1b22b7d1ce5b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"599\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            
\"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 5.666882335s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:47Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-895kq.16a7191e488b3f32\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5086f466-3d8f-4e38-bc1d-cad6e53b69c6\",\n                \"resourceVersion\": \"143\",\n                \"creationTimestamp\": \"2021-09-22T08:56:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-895kq\",\n                \"uid\": \"e7c93001-8df4-40a1-9188-1b22b7d1ce5b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"599\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:47Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-895kq.16a7191e4ee5c9b2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a6e5f627-4d67-4c6c-b32c-319f655a1d69\",\n                \"resourceVersion\": \"144\",\n                \"creationTimestamp\": \"2021-09-22T08:56:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-895kq\",\n                \"uid\": \"e7c93001-8df4-40a1-9188-1b22b7d1ce5b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"599\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:47Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-895kq.16a7191e6f638784\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6290538e-1db0-421a-b664-8102c34805a0\",\n                \"resourceVersion\": \"145\",\n                
\"creationTimestamp\": \"2021-09-22T08:56:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-895kq\",\n                \"uid\": \"e7c93001-8df4-40a1-9188-1b22b7d1ce5b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"599\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:48Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-895kq.16a7191e72c36421\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"91054f06-19c9-40f1-a653-08151dc6aab9\",\n                \"resourceVersion\": \"146\",\n                \"creationTimestamp\": \"2021-09-22T08:56:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-895kq\",\n                \"uid\": \"e7c93001-8df4-40a1-9188-1b22b7d1ce5b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"599\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:48Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-895kq.16a7191e7a8b49b8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c52af2bf-807b-4aeb-baed-e4fdbcd43d5d\",\n                \"resourceVersion\": \"147\",\n                \"creationTimestamp\": \"2021-09-22T08:56:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-895kq\",\n                \"uid\": \"e7c93001-8df4-40a1-9188-1b22b7d1ce5b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"599\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:48Z\",\n            
\"lastTimestamp\": \"2021-09-22T08:56:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-9n2gf.16a7191a7f116c6c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7f09695c-994a-4225-bb89-6330428ccbf7\",\n                \"resourceVersion\": \"111\",\n                \"creationTimestamp\": \"2021-09-22T08:56:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-9n2gf\",\n                \"uid\": \"5b738eea-7932-49c1-91ea-f59fddb4aba0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"575\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-9n2gf to ip-172-20-33-99.sa-east-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:31Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-9n2gf.16a7191bcef245af\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"266d967c-a897-42b5-b49b-de863106d2f9\",\n                \"resourceVersion\": \"127\",\n                \"creationTimestamp\": \"2021-09-22T08:56:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-9n2gf\",\n                \"uid\": \"5b738eea-7932-49c1-91ea-f59fddb4aba0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"578\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-9n2gf.16a7191d116f0325\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a76448d1-2e05-4f59-b012-01fae28b3afe\",\n                \"resourceVersion\": \"136\",\n                \"creationTimestamp\": \"2021-09-22T08:56:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-9n2gf\",\n                \"uid\": \"5b738eea-7932-49c1-91ea-f59fddb4aba0\",\n                \"apiVersion\": \"v1\",\n                
\"resourceVersion\": \"578\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 5.410392444s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-9n2gf.16a7191d1a77a7d3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"309267de-f51b-45c4-9f30-b129b10d2924\",\n                \"resourceVersion\": \"137\",\n                \"creationTimestamp\": \"2021-09-22T08:56:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-9n2gf\",\n                \"uid\": \"5b738eea-7932-49c1-91ea-f59fddb4aba0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"578\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-9n2gf.16a7191d21418ebb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"09acc9cd-ab1b-4f71-bab0-5a842b46a9eb\",\n                \"resourceVersion\": \"138\",\n                \"creationTimestamp\": \"2021-09-22T08:56:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-9n2gf\",\n                \"uid\": \"5b738eea-7932-49c1-91ea-f59fddb4aba0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"578\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-9n2gf.16a7191d252569f7\",\n                
\"namespace\": \"kube-system\",\n                \"uid\": \"41c04545-40fe-44e8-8f5a-9aad7b199211\",\n                \"resourceVersion\": \"139\",\n                \"creationTimestamp\": \"2021-09-22T08:56:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-9n2gf\",\n                \"uid\": \"5b738eea-7932-49c1-91ea-f59fddb4aba0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"578\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-9n2gf.16a7191d2883cbdc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a07eaaea-2204-4ae4-a4f7-5f9dcdd75d04\",\n                \"resourceVersion\": \"140\",\n                \"creationTimestamp\": \"2021-09-22T08:56:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-9n2gf\",\n                \"uid\": \"5b738eea-7932-49c1-91ea-f59fddb4aba0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"578\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-9n2gf.16a7191d2ff26ab3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"473b5607-a86d-4880-b5ab-be987a831b02\",\n                \"resourceVersion\": \"141\",\n                \"creationTimestamp\": \"2021-09-22T08:56:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-9n2gf\",\n                \"uid\": \"5b738eea-7932-49c1-91ea-f59fddb4aba0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"578\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n              
  \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:43Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-jc5c8.16a7190c53221a14\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3e7bd787-2127-48f4-9de0-a907e5d06f1f\",\n                \"resourceVersion\": \"66\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-jc5c8\",\n                \"uid\": \"bec49d5f-5111-4db2-ae96-279924e9b2a2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"408\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-jc5c8 to ip-172-20-52-88.sa-east-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-jc5c8.16a7190c5c88b40b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"317db5a2-91e9-4985-a112-9169b2fe32ff\",\n                \"resourceVersion\": \"67\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-jc5c8\",\n                \"uid\": \"bec49d5f-5111-4db2-ae96-279924e9b2a2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"418\"\n            },\n            \"reason\": \"FailedMount\",\n            \"message\": \"MountVolume.SetUp failed for volume \\\"kube-api-access-v599q\\\" : configmap \\\"kube-root-ca.crt\\\" not found\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-jc5c8.16a7190c9b39c277\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f5b17ee9-c1bc-4504-8d41-d265c20c356b\",\n                \"resourceVersion\": \"68\",\n                \"creationTimestamp\": \"2021-09-22T08:55:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-jc5c8\",\n    
            \"uid\": \"bec49d5f-5111-4db2-ae96-279924e9b2a2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"418\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:31Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-jc5c8.16a7190dd2199ad5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3c142cfa-fd07-4bfb-a7e8-0ab9e5cc1ec0\",\n                \"resourceVersion\": \"72\",\n                \"creationTimestamp\": \"2021-09-22T08:55:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-jc5c8\",\n                \"uid\": \"bec49d5f-5111-4db2-ae96-279924e9b2a2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"418\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 5.215580487s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-jc5c8.16a7190dda91cae8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"512975c9-49c5-4d10-a1d8-a2bfd3823c4b\",\n                \"resourceVersion\": \"73\",\n                \"creationTimestamp\": \"2021-09-22T08:55:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-jc5c8\",\n                \"uid\": \"bec49d5f-5111-4db2-ae96-279924e9b2a2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"418\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n       
 },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-jc5c8.16a7190de3046a37\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"686347cc-c323-4946-9ad6-de9d4c2ac50c\",\n                \"resourceVersion\": \"74\",\n                \"creationTimestamp\": \"2021-09-22T08:55:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-jc5c8\",\n                \"uid\": \"bec49d5f-5111-4db2-ae96-279924e9b2a2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"418\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-jc5c8.16a7190e025924eb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e0c2a642-b463-4167-99bf-00c1cb4fed73\",\n                \"resourceVersion\": \"75\",\n                \"creationTimestamp\": \"2021-09-22T08:55:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-jc5c8\",\n                \"uid\": \"bec49d5f-5111-4db2-ae96-279924e9b2a2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"418\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-jc5c8.16a7190e06728a90\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"535ccfd6-3781-4c1e-93d0-1afc4b33384c\",\n                \"resourceVersion\": \"76\",\n                \"creationTimestamp\": \"2021-09-22T08:55:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-jc5c8\",\n                \"uid\": \"bec49d5f-5111-4db2-ae96-279924e9b2a2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"418\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            
\"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-jc5c8.16a7190e0e915461\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"50633858-8c65-4a7c-b8bf-1dcd3d64d58e\",\n                \"resourceVersion\": \"77\",\n                \"creationTimestamp\": \"2021-09-22T08:55:38Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-jc5c8\",\n                \"uid\": \"bec49d5f-5111-4db2-ae96-279924e9b2a2\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"418\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:38Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-zk5bm.16a71923929f1988\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9bce5991-57b5-40df-b232-8ecd46e8def2\",\n                \"resourceVersion\": \"196\",\n                \"creationTimestamp\": \"2021-09-22T08:57:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-zk5bm\",\n                \"uid\": \"e824b9cf-be94-4c57-9127-cb4188575428\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"767\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-zk5bm to ip-172-20-41-3.sa-east-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-zk5bm.16a719241a07a6b8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7b105b1e-951e-40d4-81b7-d6058966631f\",\n                \"resourceVersion\": \"239\",\n                \"creationTimestamp\": \"2021-09-22T08:57:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": 
\"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-zk5bm\",\n                \"uid\": \"e824b9cf-be94-4c57-9127-cb4188575428\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"769\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:12Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-zk5bm.16a719254f4dfc04\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dbf12506-b722-4acb-8261-84759925b239\",\n                \"resourceVersion\": \"240\",\n                \"creationTimestamp\": \"2021-09-22T08:57:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-zk5bm\",\n                \"uid\": \"e824b9cf-be94-4c57-9127-cb4188575428\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"769\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 5.18873628s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:17Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-zk5bm.16a7192559c371d2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1f9ef741-3550-449b-81c4-27a5fd91ea34\",\n                \"resourceVersion\": \"241\",\n                \"creationTimestamp\": \"2021-09-22T08:57:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-zk5bm\",\n                \"uid\": \"e824b9cf-be94-4c57-9127-cb4188575428\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"769\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:18Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n          
  \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-zk5bm.16a719256102b844\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cc8f670a-bd7c-450c-ba0b-2c966050ed63\",\n                \"resourceVersion\": \"242\",\n                \"creationTimestamp\": \"2021-09-22T08:57:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-zk5bm\",\n                \"uid\": \"e824b9cf-be94-4c57-9127-cb4188575428\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"769\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:18Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-zk5bm.16a719257aa84ec9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0a4c0fdd-deda-4b64-a148-2c910df09c4b\",\n                \"resourceVersion\": \"247\",\n                \"creationTimestamp\": \"2021-09-22T08:57:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-zk5bm\",\n                \"uid\": \"e824b9cf-be94-4c57-9127-cb4188575428\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"769\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:18Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:49Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-zk5bm.16a719257e4043b7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8c8f7591-fa63-41be-96a9-8ed3971b41e0\",\n                \"resourceVersion\": \"248\",\n                \"creationTimestamp\": \"2021-09-22T08:57:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-zk5bm\",\n                \"uid\": \"e824b9cf-be94-4c57-9127-cb4188575428\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"769\",\n                
\"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:18Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:49Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-zk5bm.16a7192585b48e30\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"03efd909-7d25-45c9-8009-d725e21e909e\",\n                \"resourceVersion\": \"249\",\n                \"creationTimestamp\": \"2021-09-22T08:57:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-zk5bm\",\n                \"uid\": \"e824b9cf-be94-4c57-9127-cb4188575428\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"769\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:18Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:50Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16a7190c51599719\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1fbce1cf-17e0-4905-b5a2-890f54c53de9\",\n                \"resourceVersion\": \"64\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"18a80b4d-76a4-481b-bd4c-49c57f470621\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"247\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-jc5c8\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16a7191a7e67b1f3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6de9989c-8ec0-40d7-b257-23ade2c3254b\",\n                \"resourceVersion\": \"110\",\n                \"creationTimestamp\": \"2021-09-22T08:56:31Z\"\n            },\n          
  \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"18a80b4d-76a4-481b-bd4c-49c57f470621\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"462\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-9n2gf\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:31Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16a7191b53e7e06a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"676cfd62-25f4-46b3-97ff-45f246ad435d\",\n                \"resourceVersion\": \"119\",\n                \"creationTimestamp\": \"2021-09-22T08:56:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"18a80b4d-76a4-481b-bd4c-49c57f470621\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"581\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-895kq\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:35Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16a71921a58c2f6d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a50c3229-f6fb-40bd-8708-4a85545caf69\",\n                \"resourceVersion\": \"159\",\n                \"creationTimestamp\": \"2021-09-22T08:57:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"18a80b4d-76a4-481b-bd4c-49c57f470621\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"654\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-47jgp\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:02Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16a7192391548e39\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"25d548f6-fb6d-45bf-a89e-01f851fa543e\",\n                
\"resourceVersion\": \"194\",\n                \"creationTimestamp\": \"2021-09-22T08:57:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"18a80b4d-76a4-481b-bd4c-49c57f470621\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"713\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-zk5bm\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"lastTimestamp\": \"2021-09-22T08:57:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-33-99.sa-east-1.compute.internal.16a7191bbcba2357\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8eb5f02f-2d81-450c-be35-935790d7391f\",\n                \"resourceVersion\": \"124\",\n                \"creationTimestamp\": \"2021-09-22T08:56:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-33-99.sa-east-1.compute.internal\",\n                \"uid\": \"8504dfdf7e8d2e5d38dc7c9964b31764\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:36Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-33-99.sa-east-1.compute.internal.16a7191bc19b02c1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e7144763-1099-4942-981d-e39e7ec7e9f8\",\n                \"resourceVersion\": \"125\",\n                \"creationTimestamp\": \"2021-09-22T08:56:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-33-99.sa-east-1.compute.internal\",\n                \"uid\": \"8504dfdf7e8d2e5d38dc7c9964b31764\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:36Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:36Z\",\n          
  \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-33-99.sa-east-1.compute.internal.16a7191bcc0d4e5a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f234e311-b367-4b87-9258-50381089a429\",\n                \"resourceVersion\": \"126\",\n                \"creationTimestamp\": \"2021-09-22T08:56:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-33-99.sa-east-1.compute.internal\",\n                \"uid\": \"8504dfdf7e8d2e5d38dc7c9964b31764\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-33-99.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:37Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-38-78.sa-east-1.compute.internal.16a7191cdc4126be\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ed0931b6-c894-48e6-94e7-499d09aac4eb\",\n                \"resourceVersion\": \"129\",\n                \"creationTimestamp\": \"2021-09-22T08:56:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-38-78.sa-east-1.compute.internal\",\n                \"uid\": \"58f727452c7989506d7145930303d4b3\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:41Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-38-78.sa-east-1.compute.internal.16a7191ce255625a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f2c5014c-54bd-4c27-86b8-683fddde2737\",\n                \"resourceVersion\": \"130\",\n                \"creationTimestamp\": \"2021-09-22T08:56:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-38-78.sa-east-1.compute.internal\",\n                \"uid\": 
\"58f727452c7989506d7145930303d4b3\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:41Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-38-78.sa-east-1.compute.internal.16a7191ceaef1ba7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dfdb1b54-ce55-4460-ab2f-b4a072b12598\",\n                \"resourceVersion\": \"131\",\n                \"creationTimestamp\": \"2021-09-22T08:56:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-38-78.sa-east-1.compute.internal\",\n                \"uid\": \"58f727452c7989506d7145930303d4b3\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-78.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:41Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-41-3.sa-east-1.compute.internal.16a7191d72e394ca\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a169a68c-7200-4d18-84ec-aaeacc368a8c\",\n                \"resourceVersion\": \"227\",\n                \"creationTimestamp\": \"2021-09-22T08:57:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-41-3.sa-east-1.compute.internal\",\n                \"uid\": \"720588e0f301358d4ba15df2c6c3b00a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:44Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-proxy-ip-172-20-41-3.sa-east-1.compute.internal.16a7191d787a59ea\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"840eda05-df92-434c-88c9-507b390a39b0\",\n                \"resourceVersion\": \"228\",\n                \"creationTimestamp\": \"2021-09-22T08:57:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-41-3.sa-east-1.compute.internal\",\n                \"uid\": \"720588e0f301358d4ba15df2c6c3b00a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:44Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-41-3.sa-east-1.compute.internal.16a7191d7e507099\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"561ca51a-410f-4cca-8bde-a41d77a148e7\",\n                \"resourceVersion\": \"229\",\n                \"creationTimestamp\": \"2021-09-22T08:57:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-41-3.sa-east-1.compute.internal\",\n                \"uid\": \"720588e0f301358d4ba15df2c6c3b00a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-41-3.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:44Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-50-246.sa-east-1.compute.internal.16a7191b8711d0dd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"018d19f3-82cf-4326-8ecc-11c5dafceca1\",\n                \"resourceVersion\": \"188\",\n                \"creationTimestamp\": \"2021-09-22T08:57:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-50-246.sa-east-1.compute.internal\",\n                \"uid\": \"c12dd3df47e5d3cfd3f4ecaf40776e72\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.5\\\" already present on 
machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:35Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-50-246.sa-east-1.compute.internal.16a7191b8b5270a5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"24e2de47-7a9b-4151-b655-c9950fa31723\",\n                \"resourceVersion\": \"189\",\n                \"creationTimestamp\": \"2021-09-22T08:57:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-50-246.sa-east-1.compute.internal\",\n                \"uid\": \"c12dd3df47e5d3cfd3f4ecaf40776e72\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:36Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-50-246.sa-east-1.compute.internal.16a7191b909eebfd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ce9ae655-4f8f-4f3a-b302-ef81194dc325\",\n                \"resourceVersion\": \"190\",\n                \"creationTimestamp\": \"2021-09-22T08:57:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-50-246.sa-east-1.compute.internal\",\n                \"uid\": \"c12dd3df47e5d3cfd3f4ecaf40776e72\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-50-246.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:56:36Z\",\n            \"lastTimestamp\": \"2021-09-22T08:56:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-52-88.sa-east-1.compute.internal.16a718fc2dec4d09\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"efc8842e-4658-46d0-b9cd-8bb8eb9baffd\",\n                \"resourceVersion\": \"20\",\n                \"creationTimestamp\": 
\"2021-09-22T08:55:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"8c215f83d9fd74773522cb9eed1e20cb\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-52-88.sa-east-1.compute.internal.16a718fd89927f88\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"209b2ce3-c364-4d5a-950c-49c99b0daec0\",\n                \"resourceVersion\": \"24\",\n                \"creationTimestamp\": \"2021-09-22T08:55:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"8c215f83d9fd74773522cb9eed1e20cb\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:27Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-52-88.sa-east-1.compute.internal.16a718fda28b916e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"92b09359-75f3-435f-9a5a-db4843063625\",\n                \"resourceVersion\": \"28\",\n                \"creationTimestamp\": \"2021-09-22T08:55:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"8c215f83d9fd74773522cb9eed1e20cb\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:27Z\",\n            \"lastTimestamp\": 
\"2021-09-22T08:54:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-52-88.sa-east-1.compute.internal.16a718fc122dc3b2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"868804eb-ee3f-48d5-b478-a2aa1b25a579\",\n                \"resourceVersion\": \"16\",\n                \"creationTimestamp\": \"2021-09-22T08:55:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"393a644774c85169937a1fdf6366078e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-scheduler-amd64:v1.21.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:20Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-52-88.sa-east-1.compute.internal.16a718fc39464ed3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7d722336-2a55-4025-b30a-8c5cc1e2d5e3\",\n                \"resourceVersion\": \"21\",\n                \"creationTimestamp\": \"2021-09-22T08:55:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"393a644774c85169937a1fdf6366078e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-52-88.sa-east-1.compute.internal.16a718fc504b0746\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"64e67cfa-6eb4-480a-b101-414ba85e0487\",\n                \"resourceVersion\": \"23\",\n                \"creationTimestamp\": \"2021-09-22T08:55:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"kube-scheduler-ip-172-20-52-88.sa-east-1.compute.internal\",\n                \"uid\": \"393a644774c85169937a1fdf6366078e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-88.sa-east-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"lastTimestamp\": \"2021-09-22T08:54:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.16a7190bba292033\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"23c6033b-896e-4662-98ba-f37408ec0aab\",\n                \"resourceVersion\": \"52\",\n                \"creationTimestamp\": \"2021-09-22T08:55:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"4b3519fd-ff44-4c81-8b5e-98ecc64a87d4\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"363\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-52-88.sa-east-1.compute.internal_9afa59f4-cf4c-43e9-8608-9d6341da111a became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-22T08:55:28Z\",\n            \"lastTimestamp\": \"2021-09-22T08:55:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"13410\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"13411\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f52b25ab-4948-4d77-83fc-01d50d8c9cce\",\n                \"resourceVersion\": \"219\",\n                \"creationTimestamp\": \"2021-09-22T08:55:06Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/port\\\":\\\"9153\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"kube-dns\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"resourceVersion\\\":\\\"0\\\"},\\\"spec\\\":{\\\"clusterIP\\\":\\\"100.64.0.10\\\",\\\"ports\\\":[{\\\"name\\\":\\\"dns\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}],\\\"selector\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}}}\\n\",\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"100.64.0.10\",\n                \"clusterIPs\": [\n                    \"100.64.0.10\"\n                ],\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\",\n                \"ipFamilies\": [\n                    \"IPv4\"\n                ],\n                \"ipFamilyPolicy\": \"SingleStack\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"13412\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"290be6f9-2e7a-4964-9681-6f15c03dc080\",\n                \"resourceVersion\": \"526\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-22T08:55:09Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"kops-controller.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"},\\\"name\\\":\\\"kops-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kops-controller\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"dns.alpha.kubernetes.io/internal\\\":\\\"kops-controller.internal.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/kops-controller\\\",\\\"--v=2\\\",\\\"--conf=/etc/kubernetes/kops-controller/config/config.yaml\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\\\",\\\"name\\\":\\\"kops-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/config/\\\",\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/pki/\\\",\\\"name\\\":\\\"kops-controller-pki\\\"}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kops.k8s.io/kops-controller-pki\\\":\\\"\\\",\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccount\\\":\\\"kops-controller\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"kops-controller\\\"},\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes/kops-controller/\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"kops-controller-pki\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kops-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                            \"k8s-app\": \"kops-controller\",\n                            \"version\": \"v1.22.0-beta.1\"\n                        },\n                        \"annotations\": {\n                            \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-ec2d5c5397-b4901.test-cncf-aws.k8s.io\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"configMap\": {\n                                    \"name\": \"kops-controller\",\n                            
        \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/kubernetes/kops-controller/\",\n                                    \"type\": \"Directory\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kops-controller\",\n                                \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\",\n                                \"command\": [\n                                    \"/kops-controller\",\n                                    \"--v=2\",\n                                    \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kops-controller-config\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                                    },\n                                    {\n                                        \"name\": \"kops-controller-pki\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kops.k8s.io/kops-controller-pki\": \"\",\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"kops-controller\",\n                        \"serviceAccount\": \"kops-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"operator\": \"Exists\"\n    
                        }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 1,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 1,\n                \"numberReady\": 1,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 1,\n                \"numberAvailable\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"18a80b4d-76a4-481b-bd4c-49c57f470621\",\n                \"resourceVersion\": \"901\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-22T08:55:08Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"networking.flannel\",\n                    \"app\": \"flannel\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-app\": \"flannel\",\n                    \"role.kubernetes.io/networking\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"networking.flannel\\\",\\\"app\\\":\\\"flannel\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-app\\\":\\\"flannel\\\",\\\"role.kubernetes.io/networking\\\":\\\"1\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kube-flannel-ds\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"flannel\\\",\\\"tier\\\":\\\"node\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"flannel\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"nodeAffinity\\\":{\\\"requiredDuringSchedulingIgnoredDuringExecution\\\":{\\\"nodeSelectorTerms\\\":[{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"kubernetes.io/os\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"linux\\\"]}]}]}}},\\\"containers\\\":[{\\\"args\\\":[\\\"--ip-masq\\\",\\\"--kube-subnet-mgr\\\",\\\"--iptables-resync=5\\\"],\\\"command\\\":[\\\"/opt/bin/flanneld\\\"],\\\"env\\\":[{\\\"name\\\":\\\"POD_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"metadata.name\\\"}}},{\\\"name\\\":\\\"POD_NAMESPACE\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"metadata.namespace\\\"}}}],\\\"image\\\":\\\"quay.io/coreos/flannel:v0.13.0\\\",\\\"name\\\":\\\"kube-flannel\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"100Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"100Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_ADMIN\\\",\\\"NET_RAW\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/flannel\\\",\\\"name\\\":\\\"run\\\"},{\\\"mountPath\\\":\\\"/dev/net\\\",\\\"name\\\":\\\"dev-net\\\"},{\\\"mountPath\\\":\\\"/etc/kube-flannel/\\\",\\\"name\\\":\\\"flannel-cfg\
\\"}]}],\\\"hostNetwork\\\":true,\\\"initContainers\\\":[{\\\"args\\\":[\\\"-f\\\",\\\"/etc/kube-flannel/cni-conf.json\\\",\\\"/etc/cni/net.d/10-flannel.conflist\\\"],\\\"command\\\":[\\\"cp\\\"],\\\"image\\\":\\\"quay.io/coreos/flannel:v0.13.0\\\",\\\"name\\\":\\\"install-cni\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni\\\"},{\\\"mountPath\\\":\\\"/etc/kube-flannel/\\\",\\\"name\\\":\\\"flannel-cfg\\\"}]}],\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccountName\\\":\\\"flannel\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/flannel\\\"},\\\"name\\\":\\\"run\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/dev/net\\\"},\\\"name\\\":\\\"dev-net\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\"},\\\"name\\\":\\\"cni\\\"},{\\\"configMap\\\":{\\\"name\\\":\\\"kube-flannel-cfg\\\"},\\\"name\\\":\\\"flannel-cfg\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"app\": \"flannel\",\n                        \"tier\": \"node\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"app\": \"flannel\",\n                            \"tier\": \"node\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"run\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/flannel\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"hostPath\": {\n                                    \"path\": \"/dev/net\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"cni\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/cni/net.d\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"configMap\": {\n                                    \"name\": \"kube-flannel-cfg\",\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"initContainers\": [\n                            {\n                                \"name\": \"install-cni\",\n                                \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                                \"command\": [\n                                    \"cp\"\n                                ],\n                                \"args\": [\n                                    \"-f\",\n                                    \"/etc/kube-flannel/cni-conf.json\",\n                                    \"/etc/cni/net.d/10-flannel.conflist\"\n                                
],\n                                \"resources\": {},\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"cni\",\n                                        \"mountPath\": \"/etc/cni/net.d\"\n                                    },\n                                    {\n                                        \"name\": \"flannel-cfg\",\n                                        \"mountPath\": \"/etc/kube-flannel/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kube-flannel\",\n                                \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                                \"command\": [\n                                    \"/opt/bin/flanneld\"\n                                ],\n                                \"args\": [\n                                    \"--ip-masq\",\n                                    \"--kube-subnet-mgr\",\n                                    \"--iptables-resync=5\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"POD_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.name\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"POD_NAMESPACE\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.namespace\"\n                                            }\n                                        }\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"100Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"100Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"run\",\n                                        \"mountPath\": \"/run/flannel\"\n                                    },\n                                    {\n                                        \"name\": \"dev-net\",\n                                        \"mountPath\": \"/dev/net\"\n                                    },\n                                
    {\n                                        \"name\": \"flannel-cfg\",\n                                        \"mountPath\": \"/etc/kube-flannel/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_ADMIN\",\n                                            \"NET_RAW\"\n                                        ]\n                                    },\n                                    \"privileged\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"flannel\",\n                        \"serviceAccount\": \"flannel\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"nodeAffinity\": {\n                                \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                                    \"nodeSelectorTerms\": [\n                                        {\n                                            \"matchExpressions\": [\n                                                {\n                                                    \"key\": \"kubernetes.io/os\",\n                                                    \"operator\": \"In\",\n                                                    \"values\": [\n                                                        \"linux\"\n                                                    ]\n                                                }\n                                            ]\n                                        }\n                                    ]\n                                }\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": 0\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 5,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 5,\n                \"numberReady\": 5,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 5,\n                \"numberAvailable\": 5\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    
\"metadata\": {\n        \"resourceVersion\": \"13412\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4b8e1cdf-43f3-4eda-9322-6a62cb1189ee\",\n                \"resourceVersion\": \"761\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-09-22T08:55:06Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"coredns\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":\\\"10%\\\",\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"podAntiAffinity\\\":{\\\"preferredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"podAffinityTerm\\\":{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kube-dns\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"weight\\\":100}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"-conf\\\",\\\"/etc/coredns/Corefile\\\"],\\\"image\\\":\\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"failureThreshold\\\":5,\\\"httpGet\\\":{\\\"path\\\":\\\"/health\\\",\\\"port\\\":8080,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"coredns\\\",\\\"ports\\\":[{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns\\\",\\\"protocol\\\":\\\"UDP\\\"},{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns-tcp\\\",\\\"protocol\\\":\\\"TCP\\\"},{\\\"containerPort\\\":9153,\\\"name\\\":\\\"metrics\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"readinessProbe\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/ready\\\",\\\"port\\\":8181,\\\"scheme\\\":\\\"HTTP\\\"}},\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"170Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"securityContext\\\":{\\\"allowPrivilegeEscalation\\\":false,\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_BIND_SERVICE\\\"],\\\"drop\\\":[\\\"all\\\"]},\\\"readOnlyRootFilesystem\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"nodeSelector\\\":{\\\"kubernetes.
io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"Corefile\\\",\\\"path\\\":\\\"Corefile\\\"}],\\\"name\\\":\\\"coredns\\\"},\\\"name\\\":\\\"config-volume\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                 
       \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n 
                                                       \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"10%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 2,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-22T08:55:30Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:55:30Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-22T08:56:57Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:55:30Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-5dc785954d\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cdd1934b-6b67-4c1d-9670-1f97affb8e55\",\n                \"resourceVersion\": \"720\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-22T08:55:06Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"kubernetes.io/cluster-service\": \"true\"\n                },\n                \"annotations\": {\n                    
\"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"coredns-autoscaler\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"},\\\"name\\\":\\\"coredns-autoscaler\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/cluster-proportional-autoscaler\\\",\\\"--namespace=kube-system\\\",\\\"--configmap=coredns-autoscaler\\\",\\\"--target=Deployment/coredns\\\",\\\"--default-params={\\\\\\\"linear\\\\\\\":{\\\\\\\"coresPerReplica\\\\\\\":256,\\\\\\\"nodesPerReplica\\\\\\\":16,\\\\\\\"preventSinglePointFailure\\\\\\\":true}}\\\",\\\"--logtostderr=true\\\",\\\"--v=2\\\"],\\\"image\\\":\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\",\\\"name\\\":\\\"autoscaler\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"10Mi\\\"}}}],\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns-autoscaler\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": 
\"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": \"25%\",\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-22T08:57:02Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:57:02Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-22T08:57:02Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:55:30Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-autoscaler-84d4cfd89c\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"41a302db-8d4f-4c45-bc26-54bce1e85304\",\n                \"resourceVersion\": \"529\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-22T08:55:07Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"dns-controller.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": 
\"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"},\\\"name\\\":\\\"dns-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"dns-controller\\\"}},\\\"strategy\\\":{\\\"type\\\":\\\"Recreate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/dns-controller\\\",\\\"--watch-ingress=false\\\",\\\"--dns=aws-route53\\\",\\\"--zone=*/ZEMLNXIIWQ0RV\\\",\\\"--zone=*/*\\\",\\\"-v=2\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\\\",\\\"name\\\":\\\"dns-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true}}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccount\\\":\\\"dns-controller\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"version\": \"v1.22.0-beta.1\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n               
                     \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"Recreate\"\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-22T08:56:11Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:56:11Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-22T08:56:11Z\",\n                        \"lastTransitionTime\": \"2021-09-22T08:55:30Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"dns-controller-59b7d7865d\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    
\"metadata\": {\n        \"resourceVersion\": \"13413\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d9e12ddf-b816-49eb-a262-568d07e934be\",\n                \"resourceVersion\": \"760\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"4b8e1cdf-43f3-4eda-9322-6a62cb1189ee\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"5dc785954d\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"5dc785954d\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        
\"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        
\"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 2\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e0d9b561-c0bc-4868-b8d7-3cb01d9d5f51\",\n                \"resourceVersion\": \"719\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"84d4cfd89c\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"2\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns-autoscaler\",\n                        \"uid\": \"cdd1934b-6b67-4c1d-9670-1f97affb8e55\",\n                
        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\",\n                        \"pod-template-hash\": \"84d4cfd89c\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\",\n                            \"pod-template-hash\": \"84d4cfd89c\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 
1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3a96c73f-2c52-4943-a820-4310841314d4\",\n                \"resourceVersion\": \"528\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"59b7d7865d\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"1\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"dns-controller\",\n                        \"uid\": \"41a302db-8d4f-4c45-bc26-54bce1e85304\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\",\n                        \"pod-template-hash\": \"59b7d7865d\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"pod-template-hash\": \"59b7d7865d\",\n                            \"version\": \"v1.22.0-beta.1\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                          
              \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"13413\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-98qd6\",\n                \"generateName\": \"coredns-5dc785954d-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7f1ca703-86d6-4945-841f-d586efbdda26\",\n                \"resourceVersion\": \"691\",\n                \"creationTimestamp\": \"2021-09-22T08:55:30Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-5dc785954d\",\n                        \"uid\": \"d9e12ddf-b816-49eb-a262-568d07e934be\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n           
         },\n                    {\n                        \"name\": \"kube-api-access-pwmv2\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                  
              \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-pwmv2\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-33-99.sa-east-1.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    
\"kube-dns\"\n                                                ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-22T08:56:52Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-22T08:56:57Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-22T08:56:57Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-22T08:56:52Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.33.99\",\n                \"podIP\": \"100.96.1.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.1.2\"\n                    }\n                ],\n                \"startTime\": \"2021-09-22T08:56:52Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-22T08:56:57Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"imageID\": 
\"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                        \"containerID\": \"containerd://01ea0ebd8f29955d68db6eaba38a6d0f9d990e68c240da26da8dc5b7345b7e2c\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-w5582\",\n                \"generateName\": \"coredns-5dc785954d-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"859a90ad-57f4-405c-bf13-6e4ae34fd457\",\n                \"resourceVersion\": \"756\",\n                \"creationTimestamp\": \"2021-09-22T08:57:03Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-5dc785954d\",\n                        \"uid\": \"d9e12ddf-b816-49eb-a262-568d07e934be\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-dm8wn\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                        
                        }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n