Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-02 22:57
Elapsed: 56m21s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 132 lines ...
I1002 22:58:13.769634    4694 http.go:37] curl https://storage.googleapis.com/kops-ci/markers/release-1.22/latest-ci-updown-green.txt
I1002 22:58:13.772111    4694 http.go:37] curl https://storage.googleapis.com/k8s-staging-kops/kops/releases/1.22.0-beta.3+v1.22.0-beta.1-149-g5d501b7917/linux/amd64/kops
I1002 22:58:14.502270    4694 up.go:43] Cleaning up any leaked resources from previous cluster
I1002 22:58:14.502322    4694 dumplogs.go:40] /logs/artifacts/0c7e36ee-23d4-11ec-b766-fab168edef1d/kops toolbox dump --name e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user
I1002 22:58:14.520792    4715 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1002 22:58:14.520915    4715 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io" not found
W1002 22:58:15.089347    4694 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1002 22:58:15.089412    4694 down.go:48] /logs/artifacts/0c7e36ee-23d4-11ec-b766-fab168edef1d/kops delete cluster --name e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --yes
I1002 22:58:15.109767    4726 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1002 22:58:15.109910    4726 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io" not found
I1002 22:58:15.731892    4694 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/02 22:58:15 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1002 22:58:15.740851    4694 http.go:37] curl https://ip.jsb.workers.dev
I1002 22:58:15.828201    4694 up.go:144] /logs/artifacts/0c7e36ee-23d4-11ec-b766-fab168edef1d/kops create cluster --name e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.2 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=309956199498/RHEL-7.9_HVM_GA-20200917-x86_64-0-Hourly2-GP2 --channel=alpha --networking=kubenet --container-runtime=containerd --admin-access 35.226.128.128/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-south-1a --master-size c5.large
I1002 22:58:15.842957    4735 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1002 22:58:15.843174    4735 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1002 22:58:15.868672    4735 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I1002 22:58:16.409304    4735 new_cluster.go:1055]  Cloud Provider ID = aws
... skipping 30 lines ...

I1002 22:58:37.086484    4694 up.go:181] /logs/artifacts/0c7e36ee-23d4-11ec-b766-fab168edef1d/kops validate cluster --name e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1002 22:58:37.100434    4755 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1002 22:58:37.100570    4755 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io

W1002 22:58:38.920219    4755 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1002 22:58:48.967327    4755 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
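The validation error above reflects kops's DNS bootstrap: the cluster is created with a placeholder A record (203.0.113.123) for the API hostname, and the dns-controller deployment on the master replaces it once the control plane is up. A minimal sketch of how one might check whether that record has been updated; the dig and kubectl invocations below are illustrative additions, not part of this job's log, and the dns-controller deployment name in kube-system is assumed from a standard kops install:

  # Does the API record still resolve to the kops placeholder address?
  dig +short api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io
  # Expect 203.0.113.123 (a Route 53 record in this AWS run) until dns-controller updates it.

  # As the message suggests, dns-controller (and protokube) logs may hold diagnostics;
  # this assumes a working kubeconfig for the cluster:
  kubectl -n kube-system logs deployment/dns-controller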
W1002 22:58:59.011969    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 22:59:09.060434    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 22:59:19.091917    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 22:59:29.125186    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 22:59:39.162768    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 22:59:49.207871    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 22:59:59.241528    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:00:09.275865    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:00:19.305722    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:00:29.371712    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:00:39.410265    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:00:49.444585    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:00:59.488475    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:01:09.556575    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:01:19.594718    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:01:29.629357    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:01:39.668987    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:01:49.699032    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:01:59.731656    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:02:09.767661    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:02:19.801709    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:02:29.835163    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:02:39.869364    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:02:49.904466    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:02:59.937189    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:03:09.980782    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:03:20.015546    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1002 23:03:30.046579    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

... skipping 7 lines ...
Machine	i-049e8578446ca957f				machine "i-049e8578446ca957f" has not yet joined cluster
Machine	i-075a98111b6649d4c				machine "i-075a98111b6649d4c" has not yet joined cluster
Machine	i-0c7b5b478d755cb9f				machine "i-0c7b5b478d755cb9f" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-882wl		system-cluster-critical pod "coredns-5dc785954d-882wl" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-jm7kb	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-jm7kb" is pending

Validation Failed
W1002 23:03:45.539928    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

... skipping 10 lines ...
Pod	kube-system/coredns-5dc785954d-882wl					system-cluster-critical pod "coredns-5dc785954d-882wl" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-jm7kb				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-jm7kb" is pending
Pod	kube-system/ebs-csi-node-g6ttp						system-node-critical pod "ebs-csi-node-g6ttp" is pending
Pod	kube-system/ebs-csi-node-p84zh						system-node-critical pod "ebs-csi-node-p84zh" is pending
Pod	kube-system/kube-proxy-ip-172-20-40-74.ap-south-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-40-74.ap-south-1.compute.internal" is pending

Validation Failed
W1002 23:03:59.408758    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

... skipping 10 lines ...
Node	ip-172-20-33-208.ap-south-1.compute.internal	node "ip-172-20-33-208.ap-south-1.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-882wl		system-cluster-critical pod "coredns-5dc785954d-882wl" is pending
Pod	kube-system/ebs-csi-node-6227k			system-node-critical pod "ebs-csi-node-6227k" is pending
Pod	kube-system/ebs-csi-node-7gvbx			system-node-critical pod "ebs-csi-node-7gvbx" is pending
Pod	kube-system/ebs-csi-node-g6ttp			system-node-critical pod "ebs-csi-node-g6ttp" is pending

Validation Failed
W1002 23:04:13.110906    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

... skipping 21 lines ...
ip-172-20-54-138.ap-south-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-54-138.ap-south-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-54-138.ap-south-1.compute.internal" is pending

Validation Failed
W1002 23:04:40.399695    4755 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-south-1a	Master	c5.large	1	1	ap-south-1a
nodes-ap-south-1a	Node	t3.medium	4	4	ap-south-1a

... skipping 1015 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:07:31.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9271" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:07:31.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3317" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:32.199: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 156 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:803
    should apply a new configuration to an existing RC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:804
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":1,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:35.509: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
W1002 23:07:29.460379    5471 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct  2 23:07:29.460: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct  2 23:07:35.717: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:37.033: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 90 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Oct  2 23:07:30.126: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1004" to be "Succeeded or Failed"
Oct  2 23:07:30.361: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 234.800234ms
Oct  2 23:07:32.596: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470301592s
Oct  2 23:07:34.832: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.705960041s
Oct  2 23:07:37.068: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.942304478s
STEP: Saw pod success
Oct  2 23:07:37.068: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  2 23:07:37.303: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct  2 23:07:37.794: INFO: Waiting for pod pod-host-path-test to disappear
Oct  2 23:07:38.029: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.297 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:07:38.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
Oct  2 23:07:37.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct  2 23:07:38.676: INFO: Waiting up to 5m0s for pod "pod-85c369f5-3f5e-4b5a-9416-a055cb08055f" in namespace "emptydir-9655" to be "Succeeded or Failed"
Oct  2 23:07:38.927: INFO: Pod "pod-85c369f5-3f5e-4b5a-9416-a055cb08055f": Phase="Pending", Reason="", readiness=false. Elapsed: 251.041902ms
Oct  2 23:07:41.180: INFO: Pod "pod-85c369f5-3f5e-4b5a-9416-a055cb08055f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.504429298s
STEP: Saw pod success
Oct  2 23:07:41.180: INFO: Pod "pod-85c369f5-3f5e-4b5a-9416-a055cb08055f" satisfied condition "Succeeded or Failed"
Oct  2 23:07:41.431: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-85c369f5-3f5e-4b5a-9416-a055cb08055f container test-container: <nil>
STEP: delete the pod
Oct  2 23:07:41.940: INFO: Waiting for pod pod-85c369f5-3f5e-4b5a-9416-a055cb08055f to disappear
Oct  2 23:07:42.191: INFO: Pod pod-85c369f5-3f5e-4b5a-9416-a055cb08055f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.526 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:42.718: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 30 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Oct  2 23:07:29.890: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  2 23:07:30.360: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-n6bt
STEP: Creating a pod to test subpath
Oct  2 23:07:30.605: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-n6bt" in namespace "provisioning-1988" to be "Succeeded or Failed"
Oct  2 23:07:30.840: INFO: Pod "pod-subpath-test-inlinevolume-n6bt": Phase="Pending", Reason="", readiness=false. Elapsed: 235.412255ms
Oct  2 23:07:33.077: INFO: Pod "pod-subpath-test-inlinevolume-n6bt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471670742s
Oct  2 23:07:35.313: INFO: Pod "pod-subpath-test-inlinevolume-n6bt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.70771705s
Oct  2 23:07:37.550: INFO: Pod "pod-subpath-test-inlinevolume-n6bt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.944686596s
Oct  2 23:07:39.787: INFO: Pod "pod-subpath-test-inlinevolume-n6bt": Phase="Pending", Reason="", readiness=false. Elapsed: 9.181709785s
Oct  2 23:07:42.024: INFO: Pod "pod-subpath-test-inlinevolume-n6bt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.419373825s
STEP: Saw pod success
Oct  2 23:07:42.024: INFO: Pod "pod-subpath-test-inlinevolume-n6bt" satisfied condition "Succeeded or Failed"
Oct  2 23:07:42.260: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-n6bt container test-container-subpath-inlinevolume-n6bt: <nil>
STEP: delete the pod
Oct  2 23:07:42.738: INFO: Waiting for pod pod-subpath-test-inlinevolume-n6bt to disappear
Oct  2 23:07:42.973: INFO: Pod pod-subpath-test-inlinevolume-n6bt no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-n6bt
Oct  2 23:07:42.973: INFO: Deleting pod "pod-subpath-test-inlinevolume-n6bt" in namespace "provisioning-1988"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:43.949: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:07:35.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
• [SLOW TEST:9.523 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 41 lines ...
• [SLOW TEST:17.207 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:45.768: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 14 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:244
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:07:41.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
• [SLOW TEST:7.099 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:07:43.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct  2 23:07:45.391: INFO: Waiting up to 5m0s for pod "pod-62aa1956-5edd-4e80-b4b5-6a9ac4ccea16" in namespace "emptydir-3091" to be "Succeeded or Failed"
Oct  2 23:07:45.638: INFO: Pod "pod-62aa1956-5edd-4e80-b4b5-6a9ac4ccea16": Phase="Pending", Reason="", readiness=false. Elapsed: 246.755154ms
Oct  2 23:07:47.883: INFO: Pod "pod-62aa1956-5edd-4e80-b4b5-6a9ac4ccea16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.492272382s
STEP: Saw pod success
Oct  2 23:07:47.883: INFO: Pod "pod-62aa1956-5edd-4e80-b4b5-6a9ac4ccea16" satisfied condition "Succeeded or Failed"
Oct  2 23:07:48.119: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-62aa1956-5edd-4e80-b4b5-6a9ac4ccea16 container test-container: <nil>
STEP: delete the pod
Oct  2 23:07:48.601: INFO: Waiting for pod pod-62aa1956-5edd-4e80-b4b5-6a9ac4ccea16 to disappear
Oct  2 23:07:48.836: INFO: Pod pod-62aa1956-5edd-4e80-b4b5-6a9ac4ccea16 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.344 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:49.348: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct  2 23:07:46.512: INFO: Waiting up to 5m0s for pod "busybox-user-65534-1d4197e1-d0c7-44c5-9e91-8c1e21a86006" in namespace "security-context-test-4305" to be "Succeeded or Failed"
Oct  2 23:07:46.756: INFO: Pod "busybox-user-65534-1d4197e1-d0c7-44c5-9e91-8c1e21a86006": Phase="Pending", Reason="", readiness=false. Elapsed: 244.543785ms
Oct  2 23:07:49.036: INFO: Pod "busybox-user-65534-1d4197e1-d0c7-44c5-9e91-8c1e21a86006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.524188691s
Oct  2 23:07:49.036: INFO: Pod "busybox-user-65534-1d4197e1-d0c7-44c5-9e91-8c1e21a86006" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:07:49.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4305" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 4 lines ...
W1002 23:07:29.446300    5435 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct  2 23:07:29.446: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Oct  2 23:07:29.930: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  2 23:07:30.665: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-618" in namespace "provisioning-618" to be "Succeeded or Failed"
Oct  2 23:07:30.907: INFO: Pod "hostpath-symlink-prep-provisioning-618": Phase="Pending", Reason="", readiness=false. Elapsed: 241.869664ms
Oct  2 23:07:33.150: INFO: Pod "hostpath-symlink-prep-provisioning-618": Phase="Pending", Reason="", readiness=false. Elapsed: 2.484955091s
Oct  2 23:07:35.393: INFO: Pod "hostpath-symlink-prep-provisioning-618": Phase="Pending", Reason="", readiness=false. Elapsed: 4.728017159s
Oct  2 23:07:37.637: INFO: Pod "hostpath-symlink-prep-provisioning-618": Phase="Pending", Reason="", readiness=false. Elapsed: 6.971842735s
Oct  2 23:07:39.880: INFO: Pod "hostpath-symlink-prep-provisioning-618": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.214673064s
STEP: Saw pod success
Oct  2 23:07:39.880: INFO: Pod "hostpath-symlink-prep-provisioning-618" satisfied condition "Succeeded or Failed"
Oct  2 23:07:39.880: INFO: Deleting pod "hostpath-symlink-prep-provisioning-618" in namespace "provisioning-618"
Oct  2 23:07:40.166: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-618" to be fully deleted
Oct  2 23:07:40.408: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-wcf6
STEP: Creating a pod to test subpath
Oct  2 23:07:40.653: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-wcf6" in namespace "provisioning-618" to be "Succeeded or Failed"
Oct  2 23:07:40.895: INFO: Pod "pod-subpath-test-inlinevolume-wcf6": Phase="Pending", Reason="", readiness=false. Elapsed: 242.338444ms
Oct  2 23:07:43.138: INFO: Pod "pod-subpath-test-inlinevolume-wcf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485147342s
Oct  2 23:07:45.381: INFO: Pod "pod-subpath-test-inlinevolume-wcf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.727838461s
STEP: Saw pod success
Oct  2 23:07:45.381: INFO: Pod "pod-subpath-test-inlinevolume-wcf6" satisfied condition "Succeeded or Failed"
Oct  2 23:07:45.626: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-wcf6 container test-container-subpath-inlinevolume-wcf6: <nil>
STEP: delete the pod
Oct  2 23:07:46.149: INFO: Waiting for pod pod-subpath-test-inlinevolume-wcf6 to disappear
Oct  2 23:07:46.392: INFO: Pod pod-subpath-test-inlinevolume-wcf6 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-wcf6
Oct  2 23:07:46.392: INFO: Deleting pod "pod-subpath-test-inlinevolume-wcf6" in namespace "provisioning-618"
STEP: Deleting pod
Oct  2 23:07:46.634: INFO: Deleting pod "pod-subpath-test-inlinevolume-wcf6" in namespace "provisioning-618"
Oct  2 23:07:47.119: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-618" in namespace "provisioning-618" to be "Succeeded or Failed"
Oct  2 23:07:47.362: INFO: Pod "hostpath-symlink-prep-provisioning-618": Phase="Pending", Reason="", readiness=false. Elapsed: 242.682653ms
Oct  2 23:07:49.605: INFO: Pod "hostpath-symlink-prep-provisioning-618": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.485648782s
STEP: Saw pod success
Oct  2 23:07:49.605: INFO: Pod "hostpath-symlink-prep-provisioning-618" satisfied condition "Succeeded or Failed"
Oct  2 23:07:49.605: INFO: Deleting pod "hostpath-symlink-prep-provisioning-618" in namespace "provisioning-618"
Oct  2 23:07:49.851: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-618" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:07:50.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-618" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
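The subPath runs above all follow the same loop: create a test pod, then poll it for up to 5m0s until it reports "Succeeded or Failed". Below is a minimal client-go sketch of that polling pattern, assuming a kubeconfig at the default location; the helper name, 2s poll interval, and the namespace/pod names are illustrative, not the e2e framework's own helpers.

// Minimal sketch (assumed names) of the "Succeeded or Failed" polling the log shows.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodCompletion polls the pod until its phase is Succeeded or Failed,
// mirroring the repeated Phase="Pending" ... Elapsed: lines above.
func waitForPodCompletion(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Namespace and pod name taken from the run above, purely as an example.
	phase, err := waitForPodCompletion(context.Background(), cs, "provisioning-618", "pod-subpath-test-inlinevolume-wcf6", 5*time.Minute)
	fmt.Println(phase, err)
}

The real suite wraps this in its own framework helpers; the sketch only mirrors the phases logged above.
------------------------------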
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:50.624: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 103 lines ...
Oct  2 23:07:30.869: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-b9041a94-2fc3-4f1e-9eaf-e207c39e6775
STEP: Creating a pod to test consume configMaps
Oct  2 23:07:31.818: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33" in namespace "configmap-3954" to be "Succeeded or Failed"
Oct  2 23:07:32.055: INFO: Pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33": Phase="Pending", Reason="", readiness=false. Elapsed: 236.878974ms
Oct  2 23:07:34.293: INFO: Pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474861201s
Oct  2 23:07:36.531: INFO: Pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.713285809s
Oct  2 23:07:38.770: INFO: Pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.952236597s
Oct  2 23:07:41.008: INFO: Pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33": Phase="Pending", Reason="", readiness=false. Elapsed: 9.190255316s
Oct  2 23:07:43.247: INFO: Pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33": Phase="Pending", Reason="", readiness=false. Elapsed: 11.429101714s
Oct  2 23:07:45.491: INFO: Pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33": Phase="Pending", Reason="", readiness=false. Elapsed: 13.672507763s
Oct  2 23:07:47.728: INFO: Pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33": Phase="Pending", Reason="", readiness=false. Elapsed: 15.909906162s
Oct  2 23:07:49.973: INFO: Pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.154903721s
STEP: Saw pod success
Oct  2 23:07:49.973: INFO: Pod "pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33" satisfied condition "Succeeded or Failed"
Oct  2 23:07:50.210: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:07:50.739: INFO: Waiting for pod pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33 to disappear
Oct  2 23:07:50.982: INFO: Pod pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:23.142 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":19,"failed":0}

SSSSSSSS
------------------------------
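After "Saw pod success", the suite pulls the container's logs ("Trying to get logs from node ... container agnhost-container"). A library-style client-go sketch of that step follows; the function name is an assumption and the clientset is expected to be constructed elsewhere.

// Sketch of reading one container's logs from a (possibly completed) pod.
package sketch

import (
	"context"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// PodLogs streams and returns the logs of a single container in the pod,
// the equivalent of the "Trying to get logs from node ..." step above.
func PodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) (string, error) {
	stream, err := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Container: container}).Stream(ctx)
	if err != nil {
		return "", err
	}
	defer stream.Close()
	data, err := io.ReadAll(stream)
	return string(data), err
}
------------------------------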
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:51.775: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 104 lines ...
• [SLOW TEST:14.323 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:57.123: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 90 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:07:52.222: INFO: Waiting up to 5m0s for pod "metadata-volume-0450555f-b9fc-4707-8614-f312d309c30c" in namespace "projected-99" to be "Succeeded or Failed"
Oct  2 23:07:52.464: INFO: Pod "metadata-volume-0450555f-b9fc-4707-8614-f312d309c30c": Phase="Pending", Reason="", readiness=false. Elapsed: 242.458993ms
Oct  2 23:07:54.707: INFO: Pod "metadata-volume-0450555f-b9fc-4707-8614-f312d309c30c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485618451s
Oct  2 23:07:56.951: INFO: Pod "metadata-volume-0450555f-b9fc-4707-8614-f312d309c30c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.728936771s
STEP: Saw pod success
Oct  2 23:07:56.951: INFO: Pod "metadata-volume-0450555f-b9fc-4707-8614-f312d309c30c" satisfied condition "Succeeded or Failed"
Oct  2 23:07:57.193: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod metadata-volume-0450555f-b9fc-4707-8614-f312d309c30c container client-container: <nil>
STEP: delete the pod
Oct  2 23:07:57.684: INFO: Waiting for pod metadata-volume-0450555f-b9fc-4707-8614-f312d309c30c to disappear
Oct  2 23:07:57.926: INFO: Pod metadata-volume-0450555f-b9fc-4707-8614-f312d309c30c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.667 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:07:58.428: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 127 lines ...
Oct  2 23:07:48.841: INFO: PersistentVolumeClaim pvc-gqqdw found but phase is Pending instead of Bound.
Oct  2 23:07:51.093: INFO: PersistentVolumeClaim pvc-gqqdw found and phase=Bound (4.742359541s)
Oct  2 23:07:51.093: INFO: Waiting up to 3m0s for PersistentVolume local-tt5mn to have phase Bound
Oct  2 23:07:51.396: INFO: PersistentVolume local-tt5mn found and phase=Bound (302.85203ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-459q
STEP: Creating a pod to test exec-volume-test
Oct  2 23:07:52.161: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-459q" in namespace "volume-542" to be "Succeeded or Failed"
Oct  2 23:07:52.409: INFO: Pod "exec-volume-test-preprovisionedpv-459q": Phase="Pending", Reason="", readiness=false. Elapsed: 247.205203ms
Oct  2 23:07:54.655: INFO: Pod "exec-volume-test-preprovisionedpv-459q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.493472841s
STEP: Saw pod success
Oct  2 23:07:54.655: INFO: Pod "exec-volume-test-preprovisionedpv-459q" satisfied condition "Succeeded or Failed"
Oct  2 23:07:54.900: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod exec-volume-test-preprovisionedpv-459q container exec-container-preprovisionedpv-459q: <nil>
STEP: delete the pod
Oct  2 23:07:55.405: INFO: Waiting for pod exec-volume-test-preprovisionedpv-459q to disappear
Oct  2 23:07:55.670: INFO: Pod exec-volume-test-preprovisionedpv-459q no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-459q
Oct  2 23:07:55.670: INFO: Deleting pod "exec-volume-test-preprovisionedpv-459q" in namespace "volume-542"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":19,"failed":0}

SSSSSSSSSSSSS
------------------------------
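The pre-provisioned PV run above also waits for the claim to bind ("PersistentVolumeClaim pvc-gqqdw found but phase is Pending instead of Bound" until "found and phase=Bound"). A small sketch of that wait, with an assumed helper name and poll interval:

// Sketch of polling a PVC until it reaches phase Bound.
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPVCBound retries until the claim's phase is Bound, mirroring the
// "found but phase is Pending instead of Bound" retries in the log.
func WaitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}
------------------------------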
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:07:58.353: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d35db762-e151-4670-acfa-ea0d714f9032" in namespace "projected-3138" to be "Succeeded or Failed"
Oct  2 23:07:58.595: INFO: Pod "downwardapi-volume-d35db762-e151-4670-acfa-ea0d714f9032": Phase="Pending", Reason="", readiness=false. Elapsed: 242.227794ms
Oct  2 23:08:00.836: INFO: Pod "downwardapi-volume-d35db762-e151-4670-acfa-ea0d714f9032": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483243672s
Oct  2 23:08:03.076: INFO: Pod "downwardapi-volume-d35db762-e151-4670-acfa-ea0d714f9032": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.722832433s
STEP: Saw pod success
Oct  2 23:08:03.076: INFO: Pod "downwardapi-volume-d35db762-e151-4670-acfa-ea0d714f9032" satisfied condition "Succeeded or Failed"
Oct  2 23:08:03.313: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod downwardapi-volume-d35db762-e151-4670-acfa-ea0d714f9032 container client-container: <nil>
STEP: delete the pod
Oct  2 23:08:03.797: INFO: Waiting for pod downwardapi-volume-d35db762-e151-4670-acfa-ea0d714f9032 to disappear
Oct  2 23:08:04.035: INFO: Pod downwardapi-volume-d35db762-e151-4670-acfa-ea0d714f9032 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.615 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:04.568: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 122 lines ...
• [SLOW TEST:37.241 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:05.754: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 77 lines ...
• [SLOW TEST:37.605 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:06.222: INFO: Only supported for providers [openstack] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 332 lines ...
Oct  2 23:07:53.692: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5519 create -f -'
Oct  2 23:07:55.760: INFO: stderr: ""
Oct  2 23:07:55.760: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct  2 23:07:55.760: INFO: Waiting for all frontend pods to be Running.
Oct  2 23:07:56.011: INFO: Waiting for frontend to serve content.
Oct  2 23:07:57.264: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Oct  2 23:08:02.529: INFO: Trying to add a new entry to the guestbook.
Oct  2 23:08:02.778: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Oct  2 23:08:03.037: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5519 delete --grace-period=0 --force -f -'
Oct  2 23:08:04.147: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  2 23:08:04.147: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
... skipping 28 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
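The guestbook test above drives kubectl directly, piping manifests to "create -f -" ("Running '/tmp/kubectl... --namespace=kubectl-5519 create -f -'"). A minimal sketch of that pattern with os/exec; the kubectl path and the manifest are placeholders, while the server and namespace values are copied from the log purely as an example.

// Sketch of piping a manifest to `kubectl create -f -` and capturing output.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func kubectlCreateFromStdin(kubectlPath, server, kubeconfig, namespace, manifest string) (string, string, error) {
	cmd := exec.Command(kubectlPath,
		"--server="+server,
		"--kubeconfig="+kubeconfig,
		"--namespace="+namespace,
		"create", "-f", "-")
	cmd.Stdin = bytes.NewBufferString(manifest)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	return stdout.String(), stderr.String(), err
}

func main() {
	out, errOut, err := kubectlCreateFromStdin(
		"kubectl",
		"https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io",
		"/root/.kube/config",
		"kubectl-5519",
		"apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: demo\n")
	fmt.Println(out, errOut, err)
}
------------------------------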
[BeforeEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 63 lines ...
• [SLOW TEST:42.901 seconds]
[sig-node] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:11.444: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
• [SLOW TEST:43.985 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:12.423: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
• [SLOW TEST:44.938 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:406
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":1,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:13.511: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-2566/secret-test-77c3e461-acff-45e5-90f9-f20157587f9b
STEP: Creating a pod to test consume secrets
Oct  2 23:08:05.765: INFO: Waiting up to 5m0s for pod "pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c" in namespace "secrets-2566" to be "Succeeded or Failed"
Oct  2 23:08:06.000: INFO: Pod "pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 234.996913ms
Oct  2 23:08:08.243: INFO: Pod "pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478175263s
Oct  2 23:08:10.479: INFO: Pod "pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.714266243s
Oct  2 23:08:12.724: INFO: Pod "pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.959039863s
Oct  2 23:08:14.962: INFO: Pod "pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.196471094s
STEP: Saw pod success
Oct  2 23:08:14.962: INFO: Pod "pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c" satisfied condition "Succeeded or Failed"
Oct  2 23:08:15.199: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c container env-test: <nil>
STEP: delete the pod
Oct  2 23:08:15.676: INFO: Waiting for pod pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c to disappear
Oct  2 23:08:15.911: INFO: Pod pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.313 seconds]
[sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":2,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:08:13.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f8a3ff0-ae69-48e2-b3e7-5394e1f64cf1" in namespace "downward-api-4388" to be "Succeeded or Failed"
Oct  2 23:08:14.157: INFO: Pod "downwardapi-volume-5f8a3ff0-ae69-48e2-b3e7-5394e1f64cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 243.282284ms
Oct  2 23:08:16.401: INFO: Pod "downwardapi-volume-5f8a3ff0-ae69-48e2-b3e7-5394e1f64cf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.487066003s
STEP: Saw pod success
Oct  2 23:08:16.401: INFO: Pod "downwardapi-volume-5f8a3ff0-ae69-48e2-b3e7-5394e1f64cf1" satisfied condition "Succeeded or Failed"
Oct  2 23:08:16.644: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod downwardapi-volume-5f8a3ff0-ae69-48e2-b3e7-5394e1f64cf1 container client-container: <nil>
STEP: delete the pod
Oct  2 23:08:17.137: INFO: Waiting for pod downwardapi-volume-5f8a3ff0-ae69-48e2-b3e7-5394e1f64cf1 to disappear
Oct  2 23:08:17.380: INFO: Pod downwardapi-volume-5f8a3ff0-ae69-48e2-b3e7-5394e1f64cf1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.433 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 70 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:391
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:21.745: INFO: Only supported for providers [vsphere] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:08:18.446: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a64846d4-358a-4993-81e3-62ef140463fc" in namespace "projected-264" to be "Succeeded or Failed"
Oct  2 23:08:18.697: INFO: Pod "downwardapi-volume-a64846d4-358a-4993-81e3-62ef140463fc": Phase="Pending", Reason="", readiness=false. Elapsed: 250.832203ms
Oct  2 23:08:20.948: INFO: Pod "downwardapi-volume-a64846d4-358a-4993-81e3-62ef140463fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.501395914s
STEP: Saw pod success
Oct  2 23:08:20.948: INFO: Pod "downwardapi-volume-a64846d4-358a-4993-81e3-62ef140463fc" satisfied condition "Succeeded or Failed"
Oct  2 23:08:21.206: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod downwardapi-volume-a64846d4-358a-4993-81e3-62ef140463fc container client-container: <nil>
STEP: delete the pod
Oct  2 23:08:21.725: INFO: Waiting for pod downwardapi-volume-a64846d4-358a-4993-81e3-62ef140463fc to disappear
Oct  2 23:08:21.981: INFO: Pod downwardapi-volume-a64846d4-358a-4993-81e3-62ef140463fc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.526 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct  2 23:08:14.795: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  2 23:08:14.795: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-rwd2
STEP: Creating a pod to test subpath
Oct  2 23:08:15.045: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-rwd2" in namespace "provisioning-8876" to be "Succeeded or Failed"
Oct  2 23:08:15.294: INFO: Pod "pod-subpath-test-inlinevolume-rwd2": Phase="Pending", Reason="", readiness=false. Elapsed: 248.429674ms
Oct  2 23:08:17.546: INFO: Pod "pod-subpath-test-inlinevolume-rwd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500411791s
Oct  2 23:08:19.794: INFO: Pod "pod-subpath-test-inlinevolume-rwd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.748560353s
Oct  2 23:08:22.044: INFO: Pod "pod-subpath-test-inlinevolume-rwd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.998779743s
Oct  2 23:08:24.294: INFO: Pod "pod-subpath-test-inlinevolume-rwd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.247976064s
STEP: Saw pod success
Oct  2 23:08:24.294: INFO: Pod "pod-subpath-test-inlinevolume-rwd2" satisfied condition "Succeeded or Failed"
Oct  2 23:08:24.542: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-rwd2 container test-container-volume-inlinevolume-rwd2: <nil>
STEP: delete the pod
Oct  2 23:08:25.049: INFO: Waiting for pod pod-subpath-test-inlinevolume-rwd2 to disappear
Oct  2 23:08:25.297: INFO: Pod pod-subpath-test-inlinevolume-rwd2 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-rwd2
Oct  2 23:08:25.297: INFO: Deleting pod "pod-subpath-test-inlinevolume-rwd2" in namespace "provisioning-8876"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:26.319: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
Oct  2 23:08:21.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  2 23:08:23.232: INFO: Waiting up to 5m0s for pod "downward-api-62d3f8f7-7bf6-4ae3-b07f-472e89351603" in namespace "downward-api-6733" to be "Succeeded or Failed"
Oct  2 23:08:23.473: INFO: Pod "downward-api-62d3f8f7-7bf6-4ae3-b07f-472e89351603": Phase="Pending", Reason="", readiness=false. Elapsed: 241.565784ms
Oct  2 23:08:25.717: INFO: Pod "downward-api-62d3f8f7-7bf6-4ae3-b07f-472e89351603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.485210206s
STEP: Saw pod success
Oct  2 23:08:25.717: INFO: Pod "downward-api-62d3f8f7-7bf6-4ae3-b07f-472e89351603" satisfied condition "Succeeded or Failed"
Oct  2 23:08:25.967: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod downward-api-62d3f8f7-7bf6-4ae3-b07f-472e89351603 container dapi-container: <nil>
STEP: delete the pod
Oct  2 23:08:26.470: INFO: Waiting for pod downward-api-62d3f8f7-7bf6-4ae3-b07f-472e89351603 to disappear
Oct  2 23:08:26.720: INFO: Pod downward-api-62d3f8f7-7bf6-4ae3-b07f-472e89351603 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.448 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:27.245: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 37 lines ...
      Driver local doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:08:07.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
• [SLOW TEST:19.928 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:27.648: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct  2 23:08:24.007: INFO: Waiting up to 5m0s for pod "pod-00106aac-7aed-4b1c-8d6c-efcb6ea58f8c" in namespace "emptydir-3609" to be "Succeeded or Failed"
Oct  2 23:08:24.252: INFO: Pod "pod-00106aac-7aed-4b1c-8d6c-efcb6ea58f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 245.114615ms
Oct  2 23:08:26.499: INFO: Pod "pod-00106aac-7aed-4b1c-8d6c-efcb6ea58f8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.491720426s
STEP: Saw pod success
Oct  2 23:08:26.499: INFO: Pod "pod-00106aac-7aed-4b1c-8d6c-efcb6ea58f8c" satisfied condition "Succeeded or Failed"
Oct  2 23:08:26.745: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-00106aac-7aed-4b1c-8d6c-efcb6ea58f8c container test-container: <nil>
STEP: delete the pod
Oct  2 23:08:27.241: INFO: Waiting for pod pod-00106aac-7aed-4b1c-8d6c-efcb6ea58f8c to disappear
Oct  2 23:08:27.486: INFO: Pod pod-00106aac-7aed-4b1c-8d6c-efcb6ea58f8c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is non-root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":4,"skipped":36,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:28.019: INFO: Only supported for providers [gce gke] (not aws)
... skipping 494 lines ...
• [SLOW TEST:19.611 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:08:06.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 82 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:08:29.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8657" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:30.304: INFO: Only supported for providers [openstack] (not aws)
... skipping 14 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:08:28.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:08:32.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6312" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:08:26.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-a29692ec-dfa2-4edc-a8bb-233fc6d7dffa
STEP: Creating a pod to test consume secrets
Oct  2 23:08:28.141: INFO: Waiting up to 5m0s for pod "pod-secrets-5fce45fb-b04e-4eee-ab08-96c5759dd0a6" in namespace "secrets-8328" to be "Succeeded or Failed"
Oct  2 23:08:28.390: INFO: Pod "pod-secrets-5fce45fb-b04e-4eee-ab08-96c5759dd0a6": Phase="Pending", Reason="", readiness=false. Elapsed: 248.686793ms
Oct  2 23:08:30.638: INFO: Pod "pod-secrets-5fce45fb-b04e-4eee-ab08-96c5759dd0a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.497095404s
Oct  2 23:08:32.888: INFO: Pod "pod-secrets-5fce45fb-b04e-4eee-ab08-96c5759dd0a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.746415026s
STEP: Saw pod success
Oct  2 23:08:32.888: INFO: Pod "pod-secrets-5fce45fb-b04e-4eee-ab08-96c5759dd0a6" satisfied condition "Succeeded or Failed"
Oct  2 23:08:33.135: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-secrets-5fce45fb-b04e-4eee-ab08-96c5759dd0a6 container secret-volume-test: <nil>
STEP: delete the pod
Oct  2 23:08:33.642: INFO: Waiting for pod pod-secrets-5fce45fb-b04e-4eee-ab08-96c5759dd0a6 to disappear
Oct  2 23:08:33.890: INFO: Pod pod-secrets-5fce45fb-b04e-4eee-ab08-96c5759dd0a6 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.991 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:34.413: INFO: Only supported for providers [azure] (not aws)
... skipping 22 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-262ede70-99b7-440f-8b80-ba9b07f6c51c
STEP: Creating a pod to test consume configMaps
Oct  2 23:08:28.993: INFO: Waiting up to 5m0s for pod "pod-configmaps-9c2cf8fc-f63b-4029-abe7-028f86aae276" in namespace "configmap-2708" to be "Succeeded or Failed"
Oct  2 23:08:29.235: INFO: Pod "pod-configmaps-9c2cf8fc-f63b-4029-abe7-028f86aae276": Phase="Pending", Reason="", readiness=false. Elapsed: 241.989084ms
Oct  2 23:08:31.480: INFO: Pod "pod-configmaps-9c2cf8fc-f63b-4029-abe7-028f86aae276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486332856s
Oct  2 23:08:33.723: INFO: Pod "pod-configmaps-9c2cf8fc-f63b-4029-abe7-028f86aae276": Phase="Running", Reason="", readiness=true. Elapsed: 4.729830127s
Oct  2 23:08:35.975: INFO: Pod "pod-configmaps-9c2cf8fc-f63b-4029-abe7-028f86aae276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.981607448s
STEP: Saw pod success
Oct  2 23:08:35.975: INFO: Pod "pod-configmaps-9c2cf8fc-f63b-4029-abe7-028f86aae276" satisfied condition "Succeeded or Failed"
Oct  2 23:08:36.220: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-configmaps-9c2cf8fc-f63b-4029-abe7-028f86aae276 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:08:36.721: INFO: Waiting for pod pod-configmaps-9c2cf8fc-f63b-4029-abe7-028f86aae276 to disappear
Oct  2 23:08:36.965: INFO: Pod pod-configmaps-9c2cf8fc-f63b-4029-abe7-028f86aae276 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.160 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":43,"failed":0}

SSSSSSSS
------------------------------
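The ConfigMap "with mappings" test above creates a pod whose volume maps individual ConfigMap keys to file paths. A library-style sketch of that pod shape follows; the image, key, paths, and container command are illustrative placeholders, not the suite's actual generated spec.

// Sketch of a pod consuming a ConfigMap volume with key-to-path mappings.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func configMapMappingPod(ns, cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						// Map the key "data-1" to the relative file "path/to/data-1".
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32", // placeholder tag
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
}
------------------------------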
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:37.513: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 96 lines ...
Oct  2 23:08:03.596: INFO: PersistentVolumeClaim pvc-qnsff found but phase is Pending instead of Bound.
Oct  2 23:08:05.844: INFO: PersistentVolumeClaim pvc-qnsff found and phase=Bound (13.741466464s)
Oct  2 23:08:05.845: INFO: Waiting up to 3m0s for PersistentVolume local-ksk9b to have phase Bound
Oct  2 23:08:06.092: INFO: PersistentVolume local-ksk9b found and phase=Bound (247.758173ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nzmf
STEP: Creating a pod to test atomic-volume-subpath
Oct  2 23:08:06.841: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nzmf" in namespace "provisioning-4732" to be "Succeeded or Failed"
Oct  2 23:08:07.089: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Pending", Reason="", readiness=false. Elapsed: 248.721994ms
Oct  2 23:08:09.337: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Running", Reason="", readiness=true. Elapsed: 2.496556543s
Oct  2 23:08:11.586: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Running", Reason="", readiness=true. Elapsed: 4.745676312s
Oct  2 23:08:13.835: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Running", Reason="", readiness=true. Elapsed: 6.993864512s
Oct  2 23:08:16.084: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Running", Reason="", readiness=true. Elapsed: 9.242897521s
Oct  2 23:08:18.333: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Running", Reason="", readiness=true. Elapsed: 11.492289332s
Oct  2 23:08:20.582: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Running", Reason="", readiness=true. Elapsed: 13.740956721s
Oct  2 23:08:22.830: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Running", Reason="", readiness=true. Elapsed: 15.989490992s
Oct  2 23:08:25.078: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Running", Reason="", readiness=true. Elapsed: 18.237784553s
Oct  2 23:08:27.328: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Running", Reason="", readiness=true. Elapsed: 20.486953334s
Oct  2 23:08:29.576: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.734953074s
STEP: Saw pod success
Oct  2 23:08:29.576: INFO: Pod "pod-subpath-test-preprovisionedpv-nzmf" satisfied condition "Succeeded or Failed"
Oct  2 23:08:29.823: INFO: Trying to get logs from node ip-172-20-40-74.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-nzmf container test-container-subpath-preprovisionedpv-nzmf: <nil>
STEP: delete the pod
Oct  2 23:08:30.347: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nzmf to disappear
Oct  2 23:08:30.594: INFO: Pod pod-subpath-test-preprovisionedpv-nzmf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nzmf
Oct  2 23:08:30.594: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nzmf" in namespace "provisioning-4732"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
STEP: Deleting pod verify-service-up-exec-pod-688hz in namespace services-2362
STEP: verifying service-headless is not up
Oct  2 23:07:55.272: INFO: Creating new host exec pod
Oct  2 23:07:55.813: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:07:58.051: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:08:00.079: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct  2 23:08:00.079: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2362 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.142.187:80 && echo service-down-failed'
Oct  2 23:08:04.391: INFO: rc: 28
Oct  2 23:08:04.391: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.142.187:80 && echo service-down-failed" in pod services-2362/verify-service-down-host-exec-pod: error running /tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2362 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.142.187:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.68.142.187:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2362
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Oct  2 23:08:05.153: INFO: Creating new host exec pod
Oct  2 23:08:05.631: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:08:07.870: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:08:09.872: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:08:11.870: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:08:13.869: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct  2 23:08:13.869: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2362 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.202.181:80 && echo service-down-failed'
Oct  2 23:08:18.220: INFO: rc: 28
Oct  2 23:08:18.221: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.64.202.181:80 && echo service-down-failed" in pod services-2362/verify-service-down-host-exec-pod: error running /tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2362 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.202.181:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.64.202.181:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2362
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
Oct  2 23:08:18.951: INFO: Creating new host exec pod
... skipping 13 lines ...
STEP: Deleting pod verify-service-up-exec-pod-95f58 in namespace services-2362
STEP: verifying service-headless is still not up
Oct  2 23:08:30.047: INFO: Creating new host exec pod
Oct  2 23:08:30.521: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:08:32.759: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:08:34.766: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct  2 23:08:34.766: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2362 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.142.187:80 && echo service-down-failed'
Oct  2 23:08:39.136: INFO: rc: 28
Oct  2 23:08:39.137: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.142.187:80 && echo service-down-failed" in pod services-2362/verify-service-down-host-exec-pod: error running /tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2362 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.142.187:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.68.142.187:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2362
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:08:39.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:71.666 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1937
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 67 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":30,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:40.173: INFO: Only supported for providers [vsphere] (not aws)
... skipping 39 lines ...
• [SLOW TEST:51.995 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1050
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":4,"skipped":13,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: Destroying namespace "apply-8204" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller","total":-1,"completed":4,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:14.420 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:480
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":4,"skipped":20,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:44.205: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:50.067: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 96 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470
    should add annotations for pods in rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:50.498: INFO: Only supported for providers [vsphere] (not aws)
... skipping 142 lines ...
Oct  2 23:08:07.254: INFO: PersistentVolumeClaim csi-hostpathxwgbd found but phase is Pending instead of Bound.
Oct  2 23:08:09.491: INFO: PersistentVolumeClaim csi-hostpathxwgbd found but phase is Pending instead of Bound.
Oct  2 23:08:11.729: INFO: PersistentVolumeClaim csi-hostpathxwgbd found but phase is Pending instead of Bound.
Oct  2 23:08:13.967: INFO: PersistentVolumeClaim csi-hostpathxwgbd found and phase=Bound (31.612480304s)
STEP: Creating pod pod-subpath-test-dynamicpv-ppjr
STEP: Creating a pod to test subpath
Oct  2 23:08:14.681: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ppjr" in namespace "provisioning-6507" to be "Succeeded or Failed"
Oct  2 23:08:14.919: INFO: Pod "pod-subpath-test-dynamicpv-ppjr": Phase="Pending", Reason="", readiness=false. Elapsed: 237.532774ms
Oct  2 23:08:17.161: INFO: Pod "pod-subpath-test-dynamicpv-ppjr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479436823s
Oct  2 23:08:19.399: INFO: Pod "pod-subpath-test-dynamicpv-ppjr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.717431394s
Oct  2 23:08:21.644: INFO: Pod "pod-subpath-test-dynamicpv-ppjr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.962835474s
Oct  2 23:08:23.882: INFO: Pod "pod-subpath-test-dynamicpv-ppjr": Phase="Pending", Reason="", readiness=false. Elapsed: 9.200343236s
Oct  2 23:08:26.123: INFO: Pod "pod-subpath-test-dynamicpv-ppjr": Phase="Pending", Reason="", readiness=false. Elapsed: 11.441737968s
Oct  2 23:08:28.362: INFO: Pod "pod-subpath-test-dynamicpv-ppjr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.680162139s
STEP: Saw pod success
Oct  2 23:08:28.362: INFO: Pod "pod-subpath-test-dynamicpv-ppjr" satisfied condition "Succeeded or Failed"
Oct  2 23:08:28.599: INFO: Trying to get logs from node ip-172-20-40-74.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-ppjr container test-container-subpath-dynamicpv-ppjr: <nil>
STEP: delete the pod
Oct  2 23:08:29.104: INFO: Waiting for pod pod-subpath-test-dynamicpv-ppjr to disappear
Oct  2 23:08:29.341: INFO: Pod pod-subpath-test-dynamicpv-ppjr no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ppjr
Oct  2 23:08:29.341: INFO: Deleting pod "pod-subpath-test-dynamicpv-ppjr" in namespace "provisioning-6507"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:53.380: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
• [SLOW TEST:15.009 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:08:53.692: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 71 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-e6d271ee-20fb-425e-9fe1-5efcabbb26db
STEP: Creating a pod to test consume configMaps
Oct  2 23:08:55.590: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f0047ac-78d5-4c89-bf4c-c547e447da31" in namespace "configmap-1649" to be "Succeeded or Failed"
Oct  2 23:08:55.838: INFO: Pod "pod-configmaps-2f0047ac-78d5-4c89-bf4c-c547e447da31": Phase="Pending", Reason="", readiness=false. Elapsed: 247.116812ms
Oct  2 23:08:58.086: INFO: Pod "pod-configmaps-2f0047ac-78d5-4c89-bf4c-c547e447da31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.495282126s
Oct  2 23:09:00.335: INFO: Pod "pod-configmaps-2f0047ac-78d5-4c89-bf4c-c547e447da31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.744402778s
STEP: Saw pod success
Oct  2 23:09:00.335: INFO: Pod "pod-configmaps-2f0047ac-78d5-4c89-bf4c-c547e447da31" satisfied condition "Succeeded or Failed"
Oct  2 23:09:00.582: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-configmaps-2f0047ac-78d5-4c89-bf4c-c547e447da31 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:09:01.082: INFO: Waiting for pod pod-configmaps-2f0047ac-78d5-4c89-bf4c-c547e447da31 to disappear
Oct  2 23:09:01.330: INFO: Pod pod-configmaps-2f0047ac-78d5-4c89-bf4c-c547e447da31 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.976 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
Oct  2 23:07:54.842: INFO: PersistentVolume nfs-xrqmh found and phase=Bound (242.958624ms)
Oct  2 23:07:55.084: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cxfm4] to have phase Bound
Oct  2 23:07:55.328: INFO: PersistentVolumeClaim pvc-cxfm4 found and phase=Bound (243.239803ms)
STEP: Checking pod has write access to PersistentVolumes
Oct  2 23:07:55.580: INFO: Creating nfs test pod
Oct  2 23:07:55.833: INFO: Pod should terminate with exitcode 0 (success)
Oct  2 23:07:55.833: INFO: Waiting up to 5m0s for pod "pvc-tester-p4s7s" in namespace "pv-2483" to be "Succeeded or Failed"
Oct  2 23:07:56.076: INFO: Pod "pvc-tester-p4s7s": Phase="Pending", Reason="", readiness=false. Elapsed: 243.218904ms
Oct  2 23:07:58.320: INFO: Pod "pvc-tester-p4s7s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.487530133s
Oct  2 23:08:00.564: INFO: Pod "pvc-tester-p4s7s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.731564062s
STEP: Saw pod success
Oct  2 23:08:00.565: INFO: Pod "pvc-tester-p4s7s" satisfied condition "Succeeded or Failed"
Oct  2 23:08:00.565: INFO: Pod pvc-tester-p4s7s succeeded 
Oct  2 23:08:00.565: INFO: Deleting pod "pvc-tester-p4s7s" in namespace "pv-2483"
Oct  2 23:08:00.811: INFO: Wait up to 5m0s for pod "pvc-tester-p4s7s" to be fully deleted
Oct  2 23:08:01.314: INFO: Creating nfs test pod
Oct  2 23:08:01.573: INFO: Pod should terminate with exitcode 0 (success)
Oct  2 23:08:01.573: INFO: Waiting up to 5m0s for pod "pvc-tester-gvp4z" in namespace "pv-2483" to be "Succeeded or Failed"
Oct  2 23:08:01.821: INFO: Pod "pvc-tester-gvp4z": Phase="Pending", Reason="", readiness=false. Elapsed: 248.342303ms
Oct  2 23:08:04.067: INFO: Pod "pvc-tester-gvp4z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.494246141s
Oct  2 23:08:06.311: INFO: Pod "pvc-tester-gvp4z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.738414121s
Oct  2 23:08:08.566: INFO: Pod "pvc-tester-gvp4z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.993100949s
Oct  2 23:08:10.811: INFO: Pod "pvc-tester-gvp4z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.238203648s
STEP: Saw pod success
Oct  2 23:08:10.811: INFO: Pod "pvc-tester-gvp4z" satisfied condition "Succeeded or Failed"
Oct  2 23:08:10.811: INFO: Pod pvc-tester-gvp4z succeeded 
Oct  2 23:08:10.811: INFO: Deleting pod "pvc-tester-gvp4z" in namespace "pv-2483"
Oct  2 23:08:11.060: INFO: Wait up to 5m0s for pod "pvc-tester-gvp4z" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Oct  2 23:08:12.280: INFO: Deleting PVC pvc-8mbnz to trigger reclamation of PV nfs-tkgw7
Oct  2 23:08:12.280: INFO: Deleting PersistentVolumeClaim "pvc-8mbnz"
... skipping 49 lines ...
STEP: Destroying namespace "node-problem-detector-2702" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.738 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:09:02.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:09:04.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5807" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:04.906: INFO: Only supported for providers [gce gke] (not aws)
... skipping 122 lines ...
Oct  2 23:08:47.933: INFO: PersistentVolumeClaim pvc-hhxm8 found but phase is Pending instead of Bound.
Oct  2 23:08:50.184: INFO: PersistentVolumeClaim pvc-hhxm8 found and phase=Bound (11.53833223s)
Oct  2 23:08:50.184: INFO: Waiting up to 3m0s for PersistentVolume local-pqbm5 to have phase Bound
Oct  2 23:08:50.434: INFO: PersistentVolume local-pqbm5 found and phase=Bound (250.053984ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-d88s
STEP: Creating a pod to test subpath
Oct  2 23:08:51.189: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-d88s" in namespace "provisioning-3723" to be "Succeeded or Failed"
Oct  2 23:08:51.440: INFO: Pod "pod-subpath-test-preprovisionedpv-d88s": Phase="Pending", Reason="", readiness=false. Elapsed: 250.064624ms
Oct  2 23:08:53.691: INFO: Pod "pod-subpath-test-preprovisionedpv-d88s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.501777796s
Oct  2 23:08:55.943: INFO: Pod "pod-subpath-test-preprovisionedpv-d88s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.753604558s
Oct  2 23:08:58.195: INFO: Pod "pod-subpath-test-preprovisionedpv-d88s": Phase="Pending", Reason="", readiness=false. Elapsed: 7.00536545s
Oct  2 23:09:00.446: INFO: Pod "pod-subpath-test-preprovisionedpv-d88s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.256306174s
STEP: Saw pod success
Oct  2 23:09:00.446: INFO: Pod "pod-subpath-test-preprovisionedpv-d88s" satisfied condition "Succeeded or Failed"
Oct  2 23:09:00.696: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-d88s container test-container-subpath-preprovisionedpv-d88s: <nil>
STEP: delete the pod
Oct  2 23:09:01.209: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-d88s to disappear
Oct  2 23:09:01.459: INFO: Pod pod-subpath-test-preprovisionedpv-d88s no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-d88s
Oct  2 23:09:01.460: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-d88s" in namespace "provisioning-3723"
STEP: Creating pod pod-subpath-test-preprovisionedpv-d88s
STEP: Creating a pod to test subpath
Oct  2 23:09:01.962: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-d88s" in namespace "provisioning-3723" to be "Succeeded or Failed"
Oct  2 23:09:02.212: INFO: Pod "pod-subpath-test-preprovisionedpv-d88s": Phase="Pending", Reason="", readiness=false. Elapsed: 250.460454ms
Oct  2 23:09:04.464: INFO: Pod "pod-subpath-test-preprovisionedpv-d88s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.501713497s
STEP: Saw pod success
Oct  2 23:09:04.464: INFO: Pod "pod-subpath-test-preprovisionedpv-d88s" satisfied condition "Succeeded or Failed"
Oct  2 23:09:04.714: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-d88s container test-container-subpath-preprovisionedpv-d88s: <nil>
STEP: delete the pod
Oct  2 23:09:05.222: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-d88s to disappear
Oct  2 23:09:05.473: INFO: Pod pod-subpath-test-preprovisionedpv-d88s no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-d88s
Oct  2 23:09:05.473: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-d88s" in namespace "provisioning-3723"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:10.391: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
Oct  2 23:08:48.109: INFO: PersistentVolumeClaim pvc-8cr2x found but phase is Pending instead of Bound.
Oct  2 23:08:50.351: INFO: PersistentVolumeClaim pvc-8cr2x found and phase=Bound (2.485270617s)
Oct  2 23:08:50.351: INFO: Waiting up to 3m0s for PersistentVolume local-vjxls to have phase Bound
Oct  2 23:08:50.593: INFO: PersistentVolume local-vjxls found and phase=Bound (241.972334ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c64m
STEP: Creating a pod to test subpath
Oct  2 23:08:51.322: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c64m" in namespace "provisioning-3649" to be "Succeeded or Failed"
Oct  2 23:08:51.569: INFO: Pod "pod-subpath-test-preprovisionedpv-c64m": Phase="Pending", Reason="", readiness=false. Elapsed: 246.128944ms
Oct  2 23:08:53.812: INFO: Pod "pod-subpath-test-preprovisionedpv-c64m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489906347s
Oct  2 23:08:56.063: INFO: Pod "pod-subpath-test-preprovisionedpv-c64m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.740864169s
STEP: Saw pod success
Oct  2 23:08:56.063: INFO: Pod "pod-subpath-test-preprovisionedpv-c64m" satisfied condition "Succeeded or Failed"
Oct  2 23:08:56.308: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-c64m container test-container-subpath-preprovisionedpv-c64m: <nil>
STEP: delete the pod
Oct  2 23:08:56.801: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c64m to disappear
Oct  2 23:08:57.043: INFO: Pod pod-subpath-test-preprovisionedpv-c64m no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c64m
Oct  2 23:08:57.043: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c64m" in namespace "provisioning-3649"
... skipping 9 lines ...
Oct  2 23:08:59.579: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-driver-6f680dc8-5cee-4d8a-9782-11a81f93a22c] Namespace:provisioning-3649 PodName:hostexec-ip-172-20-34-88.ap-south-1.compute.internal-pw7vp ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  2 23:08:59.579: INFO: >>> kubeConfig: /root/.kube/config
Oct  2 23:09:01.169: INFO: exec ip-172-20-34-88.ap-south-1.compute.internal: command:   rm -r /tmp/local-driver-6f680dc8-5cee-4d8a-9782-11a81f93a22c
Oct  2 23:09:01.169: INFO: exec ip-172-20-34-88.ap-south-1.compute.internal: stdout:    ""
Oct  2 23:09:01.169: INFO: exec ip-172-20-34-88.ap-south-1.compute.internal: stderr:    "rm: cannot remove '/tmp/local-driver-6f680dc8-5cee-4d8a-9782-11a81f93a22c': Device or resource busy\n"
Oct  2 23:09:01.169: INFO: exec ip-172-20-34-88.ap-south-1.compute.internal: exit code: 0
Oct  2 23:09:01.169: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "command terminated with exit code 1",
        },
        Code: 1,
    }
... skipping 26 lines ...
STEP: Deleting pod hostexec-ip-172-20-34-88.ap-south-1.compute.internal-pw7vp in namespace provisioning-3649
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "provisioning-3649".
STEP: Found 14 events.
Oct  2 23:09:01.660: INFO: At 2021-10-02 23:08:39 +0000 UTC - event for hostexec-ip-172-20-34-88.ap-south-1.compute.internal-pw7vp: {default-scheduler } Scheduled: Successfully assigned provisioning-3649/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-pw7vp to ip-172-20-34-88.ap-south-1.compute.internal
Oct  2 23:09:01.660: INFO: At 2021-10-02 23:08:40 +0000 UTC - event for hostexec-ip-172-20-34-88.ap-south-1.compute.internal-pw7vp: {kubelet ip-172-20-34-88.ap-south-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-6jwlh" : failed to sync configmap cache: timed out waiting for the condition
Oct  2 23:09:01.660: INFO: At 2021-10-02 23:08:41 +0000 UTC - event for hostexec-ip-172-20-34-88.ap-south-1.compute.internal-pw7vp: {kubelet ip-172-20-34-88.ap-south-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct  2 23:09:01.660: INFO: At 2021-10-02 23:08:41 +0000 UTC - event for hostexec-ip-172-20-34-88.ap-south-1.compute.internal-pw7vp: {kubelet ip-172-20-34-88.ap-south-1.compute.internal} Created: Created container agnhost-container
Oct  2 23:09:01.660: INFO: At 2021-10-02 23:08:41 +0000 UTC - event for hostexec-ip-172-20-34-88.ap-south-1.compute.internal-pw7vp: {kubelet ip-172-20-34-88.ap-south-1.compute.internal} Started: Started container agnhost-container
Oct  2 23:09:01.660: INFO: At 2021-10-02 23:08:47 +0000 UTC - event for pvc-8cr2x: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "provisioning-3649" not found
Oct  2 23:09:01.660: INFO: At 2021-10-02 23:08:51 +0000 UTC - event for pod-subpath-test-preprovisionedpv-c64m: {default-scheduler } Scheduled: Successfully assigned provisioning-3649/pod-subpath-test-preprovisionedpv-c64m to ip-172-20-34-88.ap-south-1.compute.internal
Oct  2 23:09:01.660: INFO: At 2021-10-02 23:08:51 +0000 UTC - event for pod-subpath-test-preprovisionedpv-c64m: {kubelet ip-172-20-34-88.ap-south-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
... skipping 242 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Oct  2 23:09:01.169: Unexpected error:
          <exec.CodeExitError>: {
              Err: {
                  s: "command terminated with exit code 1",
              },
              Code: 1,
          }
          command terminated with exit code 1
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:118
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":68,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:11.229: INFO: Driver hostPath doesn't support ext3 -- skipping
... skipping 48 lines ...
• [SLOW TEST:22.678 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:09:13.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5252" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
• [SLOW TEST:20.685 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] health handlers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:09:16.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-5072" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":3,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:16.819: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 150 lines ...
Oct  2 23:08:29.714: INFO: PersistentVolumeClaim csi-hostpathvp28z found but phase is Pending instead of Bound.
Oct  2 23:08:31.949: INFO: PersistentVolumeClaim csi-hostpathvp28z found but phase is Pending instead of Bound.
Oct  2 23:08:34.185: INFO: PersistentVolumeClaim csi-hostpathvp28z found but phase is Pending instead of Bound.
Oct  2 23:08:36.425: INFO: PersistentVolumeClaim csi-hostpathvp28z found and phase=Bound (6.946253091s)
STEP: Creating pod pod-subpath-test-dynamicpv-dz5l
STEP: Creating a pod to test subpath
Oct  2 23:08:37.139: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dz5l" in namespace "provisioning-1513" to be "Succeeded or Failed"
Oct  2 23:08:37.375: INFO: Pod "pod-subpath-test-dynamicpv-dz5l": Phase="Pending", Reason="", readiness=false. Elapsed: 235.939593ms
Oct  2 23:08:39.616: INFO: Pod "pod-subpath-test-dynamicpv-dz5l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476192476s
Oct  2 23:08:41.854: INFO: Pod "pod-subpath-test-dynamicpv-dz5l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.714486008s
Oct  2 23:08:44.090: INFO: Pod "pod-subpath-test-dynamicpv-dz5l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.950288311s
Oct  2 23:08:46.329: INFO: Pod "pod-subpath-test-dynamicpv-dz5l": Phase="Pending", Reason="", readiness=false. Elapsed: 9.189422164s
Oct  2 23:08:48.565: INFO: Pod "pod-subpath-test-dynamicpv-dz5l": Phase="Pending", Reason="", readiness=false. Elapsed: 11.425637486s
Oct  2 23:08:50.802: INFO: Pod "pod-subpath-test-dynamicpv-dz5l": Phase="Pending", Reason="", readiness=false. Elapsed: 13.662140599s
Oct  2 23:08:53.038: INFO: Pod "pod-subpath-test-dynamicpv-dz5l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.898723953s
STEP: Saw pod success
Oct  2 23:08:53.038: INFO: Pod "pod-subpath-test-dynamicpv-dz5l" satisfied condition "Succeeded or Failed"
Oct  2 23:08:53.273: INFO: Trying to get logs from node ip-172-20-40-74.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-dz5l container test-container-volume-dynamicpv-dz5l: <nil>
STEP: delete the pod
Oct  2 23:08:53.753: INFO: Waiting for pod pod-subpath-test-dynamicpv-dz5l to disappear
Oct  2 23:08:53.988: INFO: Pod pod-subpath-test-dynamicpv-dz5l no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dz5l
Oct  2 23:08:53.988: INFO: Deleting pod "pod-subpath-test-dynamicpv-dz5l" in namespace "provisioning-1513"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":6,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:19.269: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 129 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should verify that all csinodes have volume limits
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumelimits.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":5,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 99 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":4,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:21.770: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 42 lines ...
Oct  2 23:07:59.082: INFO: PersistentVolumeClaim pvc-jkglb found and phase=Bound (235.662455ms)
Oct  2 23:07:59.082: INFO: Waiting up to 3m0s for PersistentVolume nfs-4r4c8 to have phase Bound
Oct  2 23:07:59.318: INFO: PersistentVolume nfs-4r4c8 found and phase=Bound (235.863564ms)
STEP: Checking pod has write access to PersistentVolume
Oct  2 23:07:59.789: INFO: Creating nfs test pod
Oct  2 23:08:00.040: INFO: Pod should terminate with exitcode 0 (success)
Oct  2 23:08:00.040: INFO: Waiting up to 5m0s for pod "pvc-tester-p4mhm" in namespace "pv-5014" to be "Succeeded or Failed"
Oct  2 23:08:00.278: INFO: Pod "pvc-tester-p4mhm": Phase="Pending", Reason="", readiness=false. Elapsed: 237.927665ms
Oct  2 23:08:02.515: INFO: Pod "pvc-tester-p4mhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475203435s
Oct  2 23:08:04.754: INFO: Pod "pvc-tester-p4mhm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.713774552s
Oct  2 23:08:06.993: INFO: Pod "pvc-tester-p4mhm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.953208533s
Oct  2 23:08:09.231: INFO: Pod "pvc-tester-p4mhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.191141061s
STEP: Saw pod success
Oct  2 23:08:09.231: INFO: Pod "pvc-tester-p4mhm" satisfied condition "Succeeded or Failed"
Oct  2 23:08:09.231: INFO: Pod pvc-tester-p4mhm succeeded 
Oct  2 23:08:09.231: INFO: Deleting pod "pvc-tester-p4mhm" in namespace "pv-5014"
Oct  2 23:08:09.479: INFO: Wait up to 5m0s for pod "pvc-tester-p4mhm" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Oct  2 23:08:09.715: INFO: Deleting PVC pvc-jkglb to trigger reclamation of PV nfs-4r4c8
Oct  2 23:08:09.715: INFO: Deleting PersistentVolumeClaim "pvc-jkglb"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":3,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:22.140: INFO: Only supported for providers [vsphere] (not aws)
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:09:18.319: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98d0142f-61c2-4c45-b4fb-79ed9d116962" in namespace "projected-6567" to be "Succeeded or Failed"
Oct  2 23:09:18.556: INFO: Pod "downwardapi-volume-98d0142f-61c2-4c45-b4fb-79ed9d116962": Phase="Pending", Reason="", readiness=false. Elapsed: 236.965205ms
Oct  2 23:09:20.795: INFO: Pod "downwardapi-volume-98d0142f-61c2-4c45-b4fb-79ed9d116962": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.47520382s
STEP: Saw pod success
Oct  2 23:09:20.795: INFO: Pod "downwardapi-volume-98d0142f-61c2-4c45-b4fb-79ed9d116962" satisfied condition "Succeeded or Failed"
Oct  2 23:09:21.035: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod downwardapi-volume-98d0142f-61c2-4c45-b4fb-79ed9d116962 container client-container: <nil>
STEP: delete the pod
Oct  2 23:09:21.519: INFO: Waiting for pod downwardapi-volume-98d0142f-61c2-4c45-b4fb-79ed9d116962 to disappear
Oct  2 23:09:21.756: INFO: Pod downwardapi-volume-98d0142f-61c2-4c45-b4fb-79ed9d116962 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.345 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:22.246: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 72 lines ...
• [SLOW TEST:7.701 seconds]
[sig-network] Ingress API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":5,"skipped":40,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:29.551: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 86 lines ...
• [SLOW TEST:20.651 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":5,"skipped":75,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
Oct  2 23:09:21.666: INFO: PersistentVolumeClaim pvc-k8glx found and phase=Bound (9.223574911s)
Oct  2 23:09:21.666: INFO: Waiting up to 3m0s for PersistentVolume nfs-65678 to have phase Bound
Oct  2 23:09:21.909: INFO: PersistentVolume nfs-65678 found and phase=Bound (242.566694ms)
STEP: Checking pod has write access to PersistentVolume
Oct  2 23:09:22.399: INFO: Creating nfs test pod
Oct  2 23:09:22.643: INFO: Pod should terminate with exitcode 0 (success)
Oct  2 23:09:22.643: INFO: Waiting up to 5m0s for pod "pvc-tester-sw98l" in namespace "pv-4805" to be "Succeeded or Failed"
Oct  2 23:09:22.886: INFO: Pod "pvc-tester-sw98l": Phase="Pending", Reason="", readiness=false. Elapsed: 242.756045ms
Oct  2 23:09:25.135: INFO: Pod "pvc-tester-sw98l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491575838s
Oct  2 23:09:27.380: INFO: Pod "pvc-tester-sw98l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.736903143s
STEP: Saw pod success
Oct  2 23:09:27.380: INFO: Pod "pvc-tester-sw98l" satisfied condition "Succeeded or Failed"
Oct  2 23:09:27.380: INFO: Pod pvc-tester-sw98l succeeded 
Oct  2 23:09:27.380: INFO: Deleting pod "pvc-tester-sw98l" in namespace "pv-4805"
Oct  2 23:09:27.631: INFO: Wait up to 5m0s for pod "pvc-tester-sw98l" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Oct  2 23:09:27.874: INFO: Deleting PVC pvc-k8glx to trigger reclamation of PV 
Oct  2 23:09:27.874: INFO: Deleting PersistentVolumeClaim "pvc-k8glx"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and non-pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:178
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":3,"skipped":21,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:32.392: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 91 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-bb86b168-517d-4a9f-8d37-563faaa06ca2
STEP: Creating a pod to test consume configMaps
Oct  2 23:09:31.342: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fef83052-7a1d-449a-b615-51f7558999e7" in namespace "projected-7685" to be "Succeeded or Failed"
Oct  2 23:09:31.589: INFO: Pod "pod-projected-configmaps-fef83052-7a1d-449a-b615-51f7558999e7": Phase="Pending", Reason="", readiness=false. Elapsed: 247.496045ms
Oct  2 23:09:33.840: INFO: Pod "pod-projected-configmaps-fef83052-7a1d-449a-b615-51f7558999e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.498213609s
STEP: Saw pod success
Oct  2 23:09:33.840: INFO: Pod "pod-projected-configmaps-fef83052-7a1d-449a-b615-51f7558999e7" satisfied condition "Succeeded or Failed"
Oct  2 23:09:34.088: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-projected-configmaps-fef83052-7a1d-449a-b615-51f7558999e7 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:09:34.591: INFO: Waiting for pod pod-projected-configmaps-fef83052-7a1d-449a-b615-51f7558999e7 to disappear
Oct  2 23:09:34.839: INFO: Pod pod-projected-configmaps-fef83052-7a1d-449a-b615-51f7558999e7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.734 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:35.387: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 149 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:36.571: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 87 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:09:33.395: INFO: Waiting up to 5m0s for pod "metadata-volume-2dc01cdf-8416-4b52-aa27-96e47060b9a8" in namespace "downward-api-4759" to be "Succeeded or Failed"
Oct  2 23:09:33.637: INFO: Pod "metadata-volume-2dc01cdf-8416-4b52-aa27-96e47060b9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 241.881665ms
Oct  2 23:09:35.880: INFO: Pod "metadata-volume-2dc01cdf-8416-4b52-aa27-96e47060b9a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.485367119s
STEP: Saw pod success
Oct  2 23:09:35.880: INFO: Pod "metadata-volume-2dc01cdf-8416-4b52-aa27-96e47060b9a8" satisfied condition "Succeeded or Failed"
Oct  2 23:09:36.122: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod metadata-volume-2dc01cdf-8416-4b52-aa27-96e47060b9a8 container client-container: <nil>
STEP: delete the pod
Oct  2 23:09:36.616: INFO: Waiting for pod metadata-volume-2dc01cdf-8416-4b52-aa27-96e47060b9a8 to disappear
Oct  2 23:09:36.859: INFO: Pod metadata-volume-2dc01cdf-8416-4b52-aa27-96e47060b9a8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.411 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":76,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:37.384: INFO: Only supported for providers [azure] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 70 lines ...
Oct  2 23:09:32.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Oct  2 23:09:33.934: INFO: Waiting up to 5m0s for pod "client-containers-bfe03fea-ee0b-495f-b2cd-8de6e5f2b703" in namespace "containers-1578" to be "Succeeded or Failed"
Oct  2 23:09:34.177: INFO: Pod "client-containers-bfe03fea-ee0b-495f-b2cd-8de6e5f2b703": Phase="Pending", Reason="", readiness=false. Elapsed: 242.771405ms
Oct  2 23:09:36.421: INFO: Pod "client-containers-bfe03fea-ee0b-495f-b2cd-8de6e5f2b703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.486938009s
STEP: Saw pod success
Oct  2 23:09:36.421: INFO: Pod "client-containers-bfe03fea-ee0b-495f-b2cd-8de6e5f2b703" satisfied condition "Succeeded or Failed"
Oct  2 23:09:36.664: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod client-containers-bfe03fea-ee0b-495f-b2cd-8de6e5f2b703 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:09:37.156: INFO: Waiting for pod client-containers-bfe03fea-ee0b-495f-b2cd-8de6e5f2b703 to disappear
Oct  2 23:09:37.401: INFO: Pod client-containers-bfe03fea-ee0b-495f-b2cd-8de6e5f2b703 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.415 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":42,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 151 lines ...
Oct  2 23:09:02.136: INFO: PersistentVolumeClaim csi-hostpathg98lt found but phase is Pending instead of Bound.
Oct  2 23:09:04.384: INFO: PersistentVolumeClaim csi-hostpathg98lt found but phase is Pending instead of Bound.
Oct  2 23:09:06.629: INFO: PersistentVolumeClaim csi-hostpathg98lt found but phase is Pending instead of Bound.
Oct  2 23:09:08.874: INFO: PersistentVolumeClaim csi-hostpathg98lt found and phase=Bound (27.204954007s)
STEP: Creating pod pod-subpath-test-dynamicpv-xpfr
STEP: Creating a pod to test subpath
Oct  2 23:09:09.615: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xpfr" in namespace "provisioning-4897" to be "Succeeded or Failed"
Oct  2 23:09:09.860: INFO: Pod "pod-subpath-test-dynamicpv-xpfr": Phase="Pending", Reason="", readiness=false. Elapsed: 245.089894ms
Oct  2 23:09:12.106: INFO: Pod "pod-subpath-test-dynamicpv-xpfr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490338058s
Oct  2 23:09:14.352: INFO: Pod "pod-subpath-test-dynamicpv-xpfr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.736498812s
Oct  2 23:09:16.598: INFO: Pod "pod-subpath-test-dynamicpv-xpfr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.982953716s
STEP: Saw pod success
Oct  2 23:09:16.598: INFO: Pod "pod-subpath-test-dynamicpv-xpfr" satisfied condition "Succeeded or Failed"
Oct  2 23:09:16.843: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-xpfr container test-container-subpath-dynamicpv-xpfr: <nil>
STEP: delete the pod
Oct  2 23:09:17.344: INFO: Waiting for pod pod-subpath-test-dynamicpv-xpfr to disappear
Oct  2 23:09:17.589: INFO: Pod pod-subpath-test-dynamicpv-xpfr no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-xpfr
Oct  2 23:09:17.590: INFO: Deleting pod "pod-subpath-test-dynamicpv-xpfr" in namespace "provisioning-4897"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 52 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:439
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":6,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:42.033: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
• [SLOW TEST:5.175 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":7,"skipped":95,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
Oct  2 23:08:17.807: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fz5lj] to have phase Bound
Oct  2 23:08:18.042: INFO: PersistentVolumeClaim pvc-fz5lj found and phase=Bound (235.159044ms)
STEP: Deleting the previously created pod
Oct  2 23:08:25.227: INFO: Deleting pod "pvc-volume-tester-6vbz8" in namespace "csi-mock-volumes-4478"
Oct  2 23:08:25.477: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6vbz8" to be fully deleted
STEP: Checking CSI driver logs
Oct  2 23:08:28.187: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/78f10f3b-4c55-429d-9ebf-8d032c1dea31/volumes/kubernetes.io~csi/pvc-285c326e-9b4b-459e-8b10-74e76c3323d9/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-6vbz8
Oct  2 23:08:28.187: INFO: Deleting pod "pvc-volume-tester-6vbz8" in namespace "csi-mock-volumes-4478"
STEP: Deleting claim pvc-fz5lj
Oct  2 23:08:28.903: INFO: Waiting up to 2m0s for PersistentVolume pvc-285c326e-9b4b-459e-8b10-74e76c3323d9 to get deleted
Oct  2 23:08:29.161: INFO: PersistentVolume pvc-285c326e-9b4b-459e-8b10-74e76c3323d9 found and phase=Released (257.765263ms)
Oct  2 23:08:31.397: INFO: PersistentVolume pvc-285c326e-9b4b-459e-8b10-74e76c3323d9 found and phase=Released (2.493928025s)
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497
    token should not be plumbed down when csiServiceAccountTokenEnabled=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":4,"skipped":8,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 40 lines ...
Oct  2 23:09:33.327: INFO: PersistentVolumeClaim pvc-tkmdn found but phase is Pending instead of Bound.
Oct  2 23:09:35.576: INFO: PersistentVolumeClaim pvc-tkmdn found and phase=Bound (13.74396833s)
Oct  2 23:09:35.576: INFO: Waiting up to 3m0s for PersistentVolume local-4ckmd to have phase Bound
Oct  2 23:09:35.826: INFO: PersistentVolume local-4ckmd found and phase=Bound (250.177504ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zxxb
STEP: Creating a pod to test subpath
Oct  2 23:09:36.571: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zxxb" in namespace "provisioning-8915" to be "Succeeded or Failed"
Oct  2 23:09:36.819: INFO: Pod "pod-subpath-test-preprovisionedpv-zxxb": Phase="Pending", Reason="", readiness=false. Elapsed: 247.752504ms
Oct  2 23:09:39.068: INFO: Pod "pod-subpath-test-preprovisionedpv-zxxb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.496242399s
Oct  2 23:09:41.316: INFO: Pod "pod-subpath-test-preprovisionedpv-zxxb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.744398635s
STEP: Saw pod success
Oct  2 23:09:41.316: INFO: Pod "pod-subpath-test-preprovisionedpv-zxxb" satisfied condition "Succeeded or Failed"
Oct  2 23:09:41.564: INFO: Trying to get logs from node ip-172-20-40-74.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-zxxb container test-container-subpath-preprovisionedpv-zxxb: <nil>
STEP: delete the pod
Oct  2 23:09:42.069: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zxxb to disappear
Oct  2 23:09:42.318: INFO: Pod pod-subpath-test-preprovisionedpv-zxxb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zxxb
Oct  2 23:09:42.318: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zxxb" in namespace "provisioning-8915"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 16 lines ...
Oct  2 23:09:19.533: INFO: PersistentVolumeClaim pvc-24mh2 found but phase is Pending instead of Bound.
Oct  2 23:09:21.785: INFO: PersistentVolumeClaim pvc-24mh2 found and phase=Bound (4.752444822s)
Oct  2 23:09:21.785: INFO: Waiting up to 3m0s for PersistentVolume aws-wpqt4 to have phase Bound
Oct  2 23:09:22.035: INFO: PersistentVolume aws-wpqt4 found and phase=Bound (250.180724ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-pw5c
STEP: Creating a pod to test exec-volume-test
Oct  2 23:09:22.789: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-pw5c" in namespace "volume-1173" to be "Succeeded or Failed"
Oct  2 23:09:23.045: INFO: Pod "exec-volume-test-preprovisionedpv-pw5c": Phase="Pending", Reason="", readiness=false. Elapsed: 256.033162ms
Oct  2 23:09:25.297: INFO: Pod "exec-volume-test-preprovisionedpv-pw5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.508149776s
Oct  2 23:09:27.547: INFO: Pod "exec-volume-test-preprovisionedpv-pw5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.75861126s
Oct  2 23:09:29.800: INFO: Pod "exec-volume-test-preprovisionedpv-pw5c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.011173084s
Oct  2 23:09:32.050: INFO: Pod "exec-volume-test-preprovisionedpv-pw5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.261698629s
STEP: Saw pod success
Oct  2 23:09:32.050: INFO: Pod "exec-volume-test-preprovisionedpv-pw5c" satisfied condition "Succeeded or Failed"
Oct  2 23:09:32.302: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod exec-volume-test-preprovisionedpv-pw5c container exec-container-preprovisionedpv-pw5c: <nil>
STEP: delete the pod
Oct  2 23:09:32.813: INFO: Waiting for pod exec-volume-test-preprovisionedpv-pw5c to disappear
Oct  2 23:09:33.063: INFO: Pod exec-volume-test-preprovisionedpv-pw5c no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-pw5c
Oct  2 23:09:33.064: INFO: Deleting pod "exec-volume-test-preprovisionedpv-pw5c" in namespace "volume-1173"
STEP: Deleting pv and pvc
Oct  2 23:09:33.314: INFO: Deleting PersistentVolumeClaim "pvc-24mh2"
Oct  2 23:09:33.566: INFO: Deleting PersistentVolume "aws-wpqt4"
Oct  2 23:09:34.195: INFO: Couldn't delete PD "aws://ap-south-1a/vol-00e72fcc3d5620905", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-00e72fcc3d5620905 is currently attached to i-075a98111b6649d4c
	status code: 400, request id: 74ec6590-01b3-4e4d-9540-8db61d7498f7
Oct  2 23:09:40.415: INFO: Couldn't delete PD "aws://ap-south-1a/vol-00e72fcc3d5620905", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-00e72fcc3d5620905 is currently attached to i-075a98111b6649d4c
	status code: 400, request id: 8cdff3dc-36a9-4d0c-882e-a7a528c09d12
Oct  2 23:09:46.535: INFO: Successfully deleted PD "aws://ap-south-1a/vol-00e72fcc3d5620905".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:09:46.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1173" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":24,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:47.125: INFO: Driver local doesn't support ext4 -- skipping
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:54
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:65
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":6,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:47.625: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 140 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:803
    should reuse port when apply to an existing SVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:817
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":6,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:48.469: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 89 lines ...
Oct  2 23:09:33.614: INFO: PersistentVolumeClaim pvc-n8gx4 found but phase is Pending instead of Bound.
Oct  2 23:09:35.858: INFO: PersistentVolumeClaim pvc-n8gx4 found and phase=Bound (13.707259923s)
Oct  2 23:09:35.858: INFO: Waiting up to 3m0s for PersistentVolume local-8z4vm to have phase Bound
Oct  2 23:09:36.100: INFO: PersistentVolume local-8z4vm found and phase=Bound (242.437704ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c24k
STEP: Creating a pod to test subpath
Oct  2 23:09:36.832: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c24k" in namespace "provisioning-3079" to be "Succeeded or Failed"
Oct  2 23:09:37.076: INFO: Pod "pod-subpath-test-preprovisionedpv-c24k": Phase="Pending", Reason="", readiness=false. Elapsed: 243.504115ms
Oct  2 23:09:39.323: INFO: Pod "pod-subpath-test-preprovisionedpv-c24k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49129174s
Oct  2 23:09:41.569: INFO: Pod "pod-subpath-test-preprovisionedpv-c24k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.736389416s
STEP: Saw pod success
Oct  2 23:09:41.569: INFO: Pod "pod-subpath-test-preprovisionedpv-c24k" satisfied condition "Succeeded or Failed"
Oct  2 23:09:41.812: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-c24k container test-container-subpath-preprovisionedpv-c24k: <nil>
STEP: delete the pod
Oct  2 23:09:42.303: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c24k to disappear
Oct  2 23:09:42.546: INFO: Pod pod-subpath-test-preprovisionedpv-c24k no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c24k
Oct  2 23:09:42.546: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c24k" in namespace "provisioning-3079"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:09:45.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Oct  2 23:09:47.157: INFO: Waiting up to 5m0s for pod "downward-api-2d457624-3430-41c7-960d-760f1049093e" in namespace "downward-api-401" to be "Succeeded or Failed"
Oct  2 23:09:47.408: INFO: Pod "downward-api-2d457624-3430-41c7-960d-760f1049093e": Phase="Pending", Reason="", readiness=false. Elapsed: 251.407494ms
Oct  2 23:09:49.657: INFO: Pod "downward-api-2d457624-3430-41c7-960d-760f1049093e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.4997506s
STEP: Saw pod success
Oct  2 23:09:49.657: INFO: Pod "downward-api-2d457624-3430-41c7-960d-760f1049093e" satisfied condition "Succeeded or Failed"
Oct  2 23:09:49.905: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod downward-api-2d457624-3430-41c7-960d-760f1049093e container dapi-container: <nil>
STEP: delete the pod
Oct  2 23:09:50.411: INFO: Waiting for pod downward-api-2d457624-3430-41c7-960d-760f1049093e to disappear
Oct  2 23:09:50.678: INFO: Pod downward-api-2d457624-3430-41c7-960d-760f1049093e no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 60 lines ...
• [SLOW TEST:29.568 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":4,"skipped":35,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:51.803: INFO: Only supported for providers [gce gke] (not aws)
... skipping 194 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":6,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:09:51.260: INFO: >>> kubeConfig: /root/.kube/config
... skipping 89 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75
    should list and delete a collection of PodDisruptionBudgets [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":7,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:53.619: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 118 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity used, insufficient capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":4,"skipped":21,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:54.571: INFO: Only supported for providers [openstack] (not aws)
... skipping 81 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:08:30.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 141 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
Oct  2 23:09:57.467: INFO: AfterEach: Cleaning up test resources.
Oct  2 23:09:57.467: INFO: Deleting PersistentVolumeClaim "pvc-qh9t4"
Oct  2 23:09:57.715: INFO: Deleting PersistentVolume "hostpath-vljn6"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":7,"skipped":59,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:79.388 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe that the PodDisruptionBudget status is not updated for unmanaged pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:191
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:09:59.564: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 166 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity disabled
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":2,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:111.108 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:02.614: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":7,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:04.288: INFO: Only supported for providers [azure] (not aws)
... skipping 139 lines ...
STEP: SSH'ing host 13.235.33.230:22
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
STEP: SSH'ing host 13.235.33.230:22
Oct  2 23:10:00.224: INFO: Got stdout from 13.235.33.230:22: stdout
Oct  2 23:10:00.224: INFO: Got stderr from 13.235.33.230:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing ec2-user@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:10:05.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-7150" for this suite.


• [SLOW TEST:27.784 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":5,"skipped":46,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:09:53.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct  2 23:09:55.105: INFO: Waiting up to 5m0s for pod "pod-22de3ba9-450a-4af0-a50e-711a24add189" in namespace "emptydir-6397" to be "Succeeded or Failed"
Oct  2 23:09:55.346: INFO: Pod "pod-22de3ba9-450a-4af0-a50e-711a24add189": Phase="Pending", Reason="", readiness=false. Elapsed: 241.079135ms
Oct  2 23:09:57.594: INFO: Pod "pod-22de3ba9-450a-4af0-a50e-711a24add189": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489150941s
Oct  2 23:09:59.837: INFO: Pod "pod-22de3ba9-450a-4af0-a50e-711a24add189": Phase="Pending", Reason="", readiness=false. Elapsed: 4.731934698s
Oct  2 23:10:02.080: INFO: Pod "pod-22de3ba9-450a-4af0-a50e-711a24add189": Phase="Pending", Reason="", readiness=false. Elapsed: 6.974813346s
Oct  2 23:10:04.322: INFO: Pod "pod-22de3ba9-450a-4af0-a50e-711a24add189": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.216834142s
STEP: Saw pod success
Oct  2 23:10:04.322: INFO: Pod "pod-22de3ba9-450a-4af0-a50e-711a24add189" satisfied condition "Succeeded or Failed"
Oct  2 23:10:04.567: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-22de3ba9-450a-4af0-a50e-711a24add189 container test-container: <nil>
STEP: delete the pod
Oct  2 23:10:05.057: INFO: Waiting for pod pod-22de3ba9-450a-4af0-a50e-711a24add189 to disappear
Oct  2 23:10:05.305: INFO: Pod pod-22de3ba9-450a-4af0-a50e-711a24add189 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 10 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:09:52.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:10:05.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7965" for this suite.


• [SLOW TEST:14.135 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":5,"skipped":67,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:06.232: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-b0b655d8-4946-4fbd-ab2a-2261c459a01e
STEP: Creating a pod to test consume configMaps
Oct  2 23:09:52.264: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c" in namespace "projected-3906" to be "Succeeded or Failed"
Oct  2 23:09:52.507: INFO: Pod "pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c": Phase="Pending", Reason="", readiness=false. Elapsed: 242.795975ms
Oct  2 23:09:54.751: INFO: Pod "pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486324422s
Oct  2 23:09:56.996: INFO: Pod "pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.731623679s
Oct  2 23:09:59.240: INFO: Pod "pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.975853605s
Oct  2 23:10:01.489: INFO: Pod "pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.224240182s
Oct  2 23:10:03.732: INFO: Pod "pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.467905949s
Oct  2 23:10:05.976: INFO: Pod "pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.711910326s
STEP: Saw pod success
Oct  2 23:10:05.977: INFO: Pod "pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c" satisfied condition "Succeeded or Failed"
Oct  2 23:10:06.219: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:10:06.718: INFO: Waiting for pod pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c to disappear
Oct  2 23:10:06.961: INFO: Pod pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.954 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":16,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:07.528: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 111 lines ...
• [SLOW TEST:7.002 seconds]
[sig-api-machinery] Discovery
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":3,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:09.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Oct  2 23:10:11.248: INFO: Waiting up to 5m0s for pod "client-containers-98d581d1-7fac-4a6f-ab3c-4ff40d7d7f67" in namespace "containers-3036" to be "Succeeded or Failed"
Oct  2 23:10:11.507: INFO: Pod "client-containers-98d581d1-7fac-4a6f-ab3c-4ff40d7d7f67": Phase="Pending", Reason="", readiness=false. Elapsed: 259.005293ms
Oct  2 23:10:13.746: INFO: Pod "client-containers-98d581d1-7fac-4a6f-ab3c-4ff40d7d7f67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.497758741s
STEP: Saw pod success
Oct  2 23:10:13.746: INFO: Pod "client-containers-98d581d1-7fac-4a6f-ab3c-4ff40d7d7f67" satisfied condition "Succeeded or Failed"
Oct  2 23:10:13.984: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod client-containers-98d581d1-7fac-4a6f-ab3c-4ff40d7d7f67 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:10:14.468: INFO: Waiting for pod client-containers-98d581d1-7fac-4a6f-ab3c-4ff40d7d7f67 to disappear
Oct  2 23:10:14.706: INFO: Pod client-containers-98d581d1-7fac-4a6f-ab3c-4ff40d7d7f67 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 59 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:478
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:482
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Oct  2 23:10:07.738: INFO: Waiting up to 5m0s for pod "pod-always-succeed6437469c-2a17-45fb-97de-50bf97074b7e" in namespace "pods-4192" to be "Succeeded or Failed"
Oct  2 23:10:07.974: INFO: Pod "pod-always-succeed6437469c-2a17-45fb-97de-50bf97074b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 235.547845ms
Oct  2 23:10:10.210: INFO: Pod "pod-always-succeed6437469c-2a17-45fb-97de-50bf97074b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472187933s
Oct  2 23:10:12.446: INFO: Pod "pod-always-succeed6437469c-2a17-45fb-97de-50bf97074b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.708182661s
Oct  2 23:10:14.683: INFO: Pod "pod-always-succeed6437469c-2a17-45fb-97de-50bf97074b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.944560589s
Oct  2 23:10:16.920: INFO: Pod "pod-always-succeed6437469c-2a17-45fb-97de-50bf97074b7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.181561556s
STEP: Saw pod success
Oct  2 23:10:16.920: INFO: Pod "pod-always-succeed6437469c-2a17-45fb-97de-50bf97074b7e" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:10:19.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:476
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:482
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":6,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:19.884: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 37 lines ...
      Driver local doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:15.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct  2 23:10:16.633: INFO: Waiting up to 5m0s for pod "security-context-d9f9457e-2093-4b37-82d6-31783be67f14" in namespace "security-context-2319" to be "Succeeded or Failed"
Oct  2 23:10:16.871: INFO: Pod "security-context-d9f9457e-2093-4b37-82d6-31783be67f14": Phase="Pending", Reason="", readiness=false. Elapsed: 238.308564ms
Oct  2 23:10:19.110: INFO: Pod "security-context-d9f9457e-2093-4b37-82d6-31783be67f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.477553023s
STEP: Saw pod success
Oct  2 23:10:19.111: INFO: Pod "security-context-d9f9457e-2093-4b37-82d6-31783be67f14" satisfied condition "Succeeded or Failed"
Oct  2 23:10:19.349: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod security-context-d9f9457e-2093-4b37-82d6-31783be67f14 container test-container: <nil>
STEP: delete the pod
Oct  2 23:10:19.840: INFO: Waiting for pod security-context-d9f9457e-2093-4b37-82d6-31783be67f14 to disappear
Oct  2 23:10:20.077: INFO: Pod security-context-d9f9457e-2093-4b37-82d6-31783be67f14 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 26 lines ...
• [SLOW TEST:62.186 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:21.489: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
Oct  2 23:10:21.566: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.651 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 22 lines ...
• [SLOW TEST:46.603 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":7,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:22.066: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 197 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":37,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:10:22.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1652" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":8,"skipped":53,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":2,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:09:53.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-8qk4
STEP: Creating a pod to test atomic-volume-subpath
Oct  2 23:09:54.984: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8qk4" in namespace "subpath-6611" to be "Succeeded or Failed"
Oct  2 23:09:55.228: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Pending", Reason="", readiness=false. Elapsed: 244.234245ms
Oct  2 23:09:57.473: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.488823981s
Oct  2 23:09:59.717: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.733286328s
Oct  2 23:10:01.963: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.978634064s
Oct  2 23:10:04.210: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Running", Reason="", readiness=true. Elapsed: 9.225730081s
Oct  2 23:10:06.457: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Running", Reason="", readiness=true. Elapsed: 11.472782858s
... skipping 2 lines ...
Oct  2 23:10:13.193: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Running", Reason="", readiness=true. Elapsed: 18.20895315s
Oct  2 23:10:15.438: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Running", Reason="", readiness=true. Elapsed: 20.453557578s
Oct  2 23:10:17.685: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Running", Reason="", readiness=true. Elapsed: 22.700349495s
Oct  2 23:10:19.930: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Running", Reason="", readiness=true. Elapsed: 24.946205973s
Oct  2 23:10:22.175: INFO: Pod "pod-subpath-test-configmap-8qk4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.190947501s
STEP: Saw pod success
Oct  2 23:10:22.175: INFO: Pod "pod-subpath-test-configmap-8qk4" satisfied condition "Succeeded or Failed"
Oct  2 23:10:22.505: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-subpath-test-configmap-8qk4 container test-container-subpath-configmap-8qk4: <nil>
STEP: delete the pod
Oct  2 23:10:23.001: INFO: Waiting for pod pod-subpath-test-configmap-8qk4 to disappear
Oct  2 23:10:23.245: INFO: Pod pod-subpath-test-configmap-8qk4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8qk4
Oct  2 23:10:23.245: INFO: Deleting pod "pod-subpath-test-configmap-8qk4" in namespace "subpath-6611"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":8,"skipped":69,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:15.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
Oct  2 23:10:21.509: INFO: The status of Pod pod-update-activedeadlineseconds-bf67c154-ef85-4ff6-b121-24def99019a7 is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:10:23.509: INFO: The status of Pod pod-update-activedeadlineseconds-bf67c154-ef85-4ff6-b121-24def99019a7 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct  2 23:10:25.010: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bf67c154-ef85-4ff6-b121-24def99019a7"
Oct  2 23:10:25.010: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bf67c154-ef85-4ff6-b121-24def99019a7" in namespace "pods-5104" to be "terminated due to deadline exceeded"
Oct  2 23:10:25.257: INFO: Pod "pod-update-activedeadlineseconds-bf67c154-ef85-4ff6-b121-24def99019a7": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 247.035275ms
Oct  2 23:10:25.257: INFO: Pod "pod-update-activedeadlineseconds-bf67c154-ef85-4ff6-b121-24def99019a7" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:10:25.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5104" for this suite.


• [SLOW TEST:10.242 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":69,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:431
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":8,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:27.385: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 80 lines ...
• [SLOW TEST:5.251 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:21.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct  2 23:10:23.009: INFO: Waiting up to 5m0s for pod "pod-bf581be3-0573-4ea7-a671-76626b0dbfaa" in namespace "emptydir-755" to be "Succeeded or Failed"
Oct  2 23:10:23.244: INFO: Pod "pod-bf581be3-0573-4ea7-a671-76626b0dbfaa": Phase="Pending", Reason="", readiness=false. Elapsed: 235.057655ms
Oct  2 23:10:25.480: INFO: Pod "pod-bf581be3-0573-4ea7-a671-76626b0dbfaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470863174s
Oct  2 23:10:27.716: INFO: Pod "pod-bf581be3-0573-4ea7-a671-76626b0dbfaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.706799833s
STEP: Saw pod success
Oct  2 23:10:27.716: INFO: Pod "pod-bf581be3-0573-4ea7-a671-76626b0dbfaa" satisfied condition "Succeeded or Failed"
Oct  2 23:10:27.951: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-bf581be3-0573-4ea7-a671-76626b0dbfaa container test-container: <nil>
STEP: delete the pod
Oct  2 23:10:28.435: INFO: Waiting for pod pod-bf581be3-0573-4ea7-a671-76626b0dbfaa to disappear
Oct  2 23:10:28.671: INFO: Pod pod-bf581be3-0573-4ea7-a671-76626b0dbfaa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.553 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":90,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:29.184: INFO: Only supported for providers [vsphere] (not aws)
... skipping 71 lines ...
Oct  2 23:10:01.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
Oct  2 23:10:02.456: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  2 23:10:02.968: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4630" in namespace "provisioning-4630" to be "Succeeded or Failed"
Oct  2 23:10:03.213: INFO: Pod "hostpath-symlink-prep-provisioning-4630": Phase="Pending", Reason="", readiness=false. Elapsed: 245.015234ms
Oct  2 23:10:05.458: INFO: Pod "hostpath-symlink-prep-provisioning-4630": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489808921s
Oct  2 23:10:07.705: INFO: Pod "hostpath-symlink-prep-provisioning-4630": Phase="Pending", Reason="", readiness=false. Elapsed: 4.736386568s
Oct  2 23:10:09.950: INFO: Pod "hostpath-symlink-prep-provisioning-4630": Phase="Pending", Reason="", readiness=false. Elapsed: 6.981882665s
Oct  2 23:10:12.195: INFO: Pod "hostpath-symlink-prep-provisioning-4630": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.226951143s
STEP: Saw pod success
Oct  2 23:10:12.195: INFO: Pod "hostpath-symlink-prep-provisioning-4630" satisfied condition "Succeeded or Failed"
Oct  2 23:10:12.195: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4630" in namespace "provisioning-4630"
Oct  2 23:10:12.445: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4630" to be fully deleted
Oct  2 23:10:12.691: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-hwh5
Oct  2 23:10:21.427: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-4630 exec pod-subpath-test-inlinevolume-hwh5 --container test-container-volume-inlinevolume-hwh5 -- /bin/sh -c rm -r /test-volume/provisioning-4630'
Oct  2 23:10:23.732: INFO: stderr: ""
Oct  2 23:10:23.732: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-hwh5
Oct  2 23:10:23.732: INFO: Deleting pod "pod-subpath-test-inlinevolume-hwh5" in namespace "provisioning-4630"
Oct  2 23:10:23.980: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-hwh5" to be fully deleted
STEP: Deleting pod
Oct  2 23:10:26.475: INFO: Deleting pod "pod-subpath-test-inlinevolume-hwh5" in namespace "provisioning-4630"
Oct  2 23:10:26.965: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4630" in namespace "provisioning-4630" to be "Succeeded or Failed"
Oct  2 23:10:27.211: INFO: Pod "hostpath-symlink-prep-provisioning-4630": Phase="Pending", Reason="", readiness=false. Elapsed: 245.576054ms
Oct  2 23:10:29.455: INFO: Pod "hostpath-symlink-prep-provisioning-4630": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490312833s
Oct  2 23:10:31.700: INFO: Pod "hostpath-symlink-prep-provisioning-4630": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.735366841s
STEP: Saw pod success
Oct  2 23:10:31.701: INFO: Pod "hostpath-symlink-prep-provisioning-4630" satisfied condition "Succeeded or Failed"
Oct  2 23:10:31.701: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4630" in namespace "provisioning-4630"
Oct  2 23:10:31.949: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4630" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:10:32.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4630" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":35,"failed":0}

SSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":3,"skipped":1,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:09:56.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
Oct  2 23:10:23.951: INFO: Unable to read jessie_udp@dns-test-service.dns-9462 from pod dns-9462/dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379: the server could not find the requested resource (get pods dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379)
Oct  2 23:10:24.197: INFO: Unable to read jessie_tcp@dns-test-service.dns-9462 from pod dns-9462/dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379: the server could not find the requested resource (get pods dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379)
Oct  2 23:10:24.442: INFO: Unable to read jessie_udp@dns-test-service.dns-9462.svc from pod dns-9462/dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379: the server could not find the requested resource (get pods dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379)
Oct  2 23:10:24.689: INFO: Unable to read jessie_tcp@dns-test-service.dns-9462.svc from pod dns-9462/dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379: the server could not find the requested resource (get pods dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379)
Oct  2 23:10:24.935: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9462.svc from pod dns-9462/dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379: the server could not find the requested resource (get pods dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379)
Oct  2 23:10:25.181: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9462.svc from pod dns-9462/dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379: the server could not find the requested resource (get pods dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379)
Oct  2 23:10:26.654: INFO: Lookups using dns-9462/dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9462 wheezy_tcp@dns-test-service.dns-9462 wheezy_udp@dns-test-service.dns-9462.svc wheezy_tcp@dns-test-service.dns-9462.svc wheezy_udp@_http._tcp.dns-test-service.dns-9462.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9462.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9462 jessie_tcp@dns-test-service.dns-9462 jessie_udp@dns-test-service.dns-9462.svc jessie_tcp@dns-test-service.dns-9462.svc jessie_udp@_http._tcp.dns-test-service.dns-9462.svc jessie_tcp@_http._tcp.dns-test-service.dns-9462.svc]

Oct  2 23:10:38.596: INFO: DNS probes using dns-9462/dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:42.948 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":1,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:05.803: INFO: >>> kubeConfig: /root/.kube/config
... skipping 14 lines ...
Oct  2 23:10:18.784: INFO: PersistentVolumeClaim pvc-qbrp9 found but phase is Pending instead of Bound.
Oct  2 23:10:21.026: INFO: PersistentVolumeClaim pvc-qbrp9 found and phase=Bound (6.969806868s)
Oct  2 23:10:21.027: INFO: Waiting up to 3m0s for PersistentVolume local-z57vb to have phase Bound
Oct  2 23:10:21.268: INFO: PersistentVolume local-z57vb found and phase=Bound (241.202205ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-n2xq
STEP: Creating a pod to test subpath
Oct  2 23:10:21.999: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n2xq" in namespace "provisioning-3141" to be "Succeeded or Failed"
Oct  2 23:10:22.241: INFO: Pod "pod-subpath-test-preprovisionedpv-n2xq": Phase="Pending", Reason="", readiness=false. Elapsed: 241.892535ms
Oct  2 23:10:24.485: INFO: Pod "pod-subpath-test-preprovisionedpv-n2xq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485670883s
Oct  2 23:10:26.730: INFO: Pod "pod-subpath-test-preprovisionedpv-n2xq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.7302358s
STEP: Saw pod success
Oct  2 23:10:26.730: INFO: Pod "pod-subpath-test-preprovisionedpv-n2xq" satisfied condition "Succeeded or Failed"
Oct  2 23:10:26.971: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-n2xq container test-container-subpath-preprovisionedpv-n2xq: <nil>
STEP: delete the pod
Oct  2 23:10:27.461: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n2xq to disappear
Oct  2 23:10:27.704: INFO: Pod pod-subpath-test-preprovisionedpv-n2xq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n2xq
Oct  2 23:10:27.704: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n2xq" in namespace "provisioning-3141"
STEP: Creating pod pod-subpath-test-preprovisionedpv-n2xq
STEP: Creating a pod to test subpath
Oct  2 23:10:28.189: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n2xq" in namespace "provisioning-3141" to be "Succeeded or Failed"
Oct  2 23:10:28.430: INFO: Pod "pod-subpath-test-preprovisionedpv-n2xq": Phase="Pending", Reason="", readiness=false. Elapsed: 241.036824ms
Oct  2 23:10:30.672: INFO: Pod "pod-subpath-test-preprovisionedpv-n2xq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.482825053s
STEP: Saw pod success
Oct  2 23:10:30.672: INFO: Pod "pod-subpath-test-preprovisionedpv-n2xq" satisfied condition "Succeeded or Failed"
Oct  2 23:10:30.914: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-n2xq container test-container-subpath-preprovisionedpv-n2xq: <nil>
STEP: delete the pod
Oct  2 23:10:31.403: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n2xq to disappear
Oct  2 23:10:31.645: INFO: Pod pod-subpath-test-preprovisionedpv-n2xq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n2xq
Oct  2 23:10:31.645: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n2xq" in namespace "provisioning-3141"
... skipping 6 lines ...
Oct  2 23:10:32.625: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-driver-f05c69d5-3eda-44a2-89fc-29a31e37d0b0 && umount /tmp/local-driver-f05c69d5-3eda-44a2-89fc-29a31e37d0b0-backend && rm -r /tmp/local-driver-f05c69d5-3eda-44a2-89fc-29a31e37d0b0-backend] Namespace:provisioning-3141 PodName:hostexec-ip-172-20-34-88.ap-south-1.compute.internal-5vr4x ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  2 23:10:32.625: INFO: >>> kubeConfig: /root/.kube/config
Oct  2 23:10:34.174: INFO: exec ip-172-20-34-88.ap-south-1.compute.internal: command:   rm /tmp/local-driver-f05c69d5-3eda-44a2-89fc-29a31e37d0b0 && umount /tmp/local-driver-f05c69d5-3eda-44a2-89fc-29a31e37d0b0-backend && rm -r /tmp/local-driver-f05c69d5-3eda-44a2-89fc-29a31e37d0b0-backend
Oct  2 23:10:34.174: INFO: exec ip-172-20-34-88.ap-south-1.compute.internal: stdout:    ""
Oct  2 23:10:34.174: INFO: exec ip-172-20-34-88.ap-south-1.compute.internal: stderr:    "rm: cannot remove '/tmp/local-driver-f05c69d5-3eda-44a2-89fc-29a31e37d0b0-backend': Device or resource busy\n"
Oct  2 23:10:34.174: INFO: exec ip-172-20-34-88.ap-south-1.compute.internal: exit code: 0
Oct  2 23:10:34.174: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "command terminated with exit code 1",
        },
        Code: 1,
    }
... skipping 281 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395

      Oct  2 23:10:34.174: Unexpected error:
          <exec.CodeExitError>: {
              Err: {
                  s: "command terminated with exit code 1",
              },
              Code: 1,
          }
          command terminated with exit code 1
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:271
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":5,"skipped":6,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:09:48.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
• [SLOW TEST:57.427 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":6,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 15 lines ...
Oct  2 23:10:34.009: INFO: PersistentVolumeClaim pvc-cfhh6 found but phase is Pending instead of Bound.
Oct  2 23:10:36.260: INFO: PersistentVolumeClaim pvc-cfhh6 found and phase=Bound (2.501266872s)
Oct  2 23:10:36.260: INFO: Waiting up to 3m0s for PersistentVolume local-rjxxc to have phase Bound
Oct  2 23:10:36.510: INFO: PersistentVolume local-rjxxc found and phase=Bound (249.822864ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4zkp
STEP: Creating a pod to test subpath
Oct  2 23:10:37.262: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4zkp" in namespace "provisioning-7372" to be "Succeeded or Failed"
Oct  2 23:10:37.512: INFO: Pod "pod-subpath-test-preprovisionedpv-4zkp": Phase="Pending", Reason="", readiness=false. Elapsed: 250.045444ms
Oct  2 23:10:39.763: INFO: Pod "pod-subpath-test-preprovisionedpv-4zkp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.501306452s
Oct  2 23:10:42.014: INFO: Pod "pod-subpath-test-preprovisionedpv-4zkp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.752275961s
STEP: Saw pod success
Oct  2 23:10:42.015: INFO: Pod "pod-subpath-test-preprovisionedpv-4zkp" satisfied condition "Succeeded or Failed"
Oct  2 23:10:42.265: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-4zkp container test-container-subpath-preprovisionedpv-4zkp: <nil>
STEP: delete the pod
Oct  2 23:10:42.776: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4zkp to disappear
Oct  2 23:10:43.025: INFO: Pod pod-subpath-test-preprovisionedpv-4zkp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4zkp
Oct  2 23:10:43.025: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4zkp" in namespace "provisioning-7372"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":9,"skipped":74,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Oct  2 23:10:41.108: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  2 23:10:41.108: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-b877
STEP: Creating a pod to test subpath
Oct  2 23:10:41.355: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-b877" in namespace "provisioning-4491" to be "Succeeded or Failed"
Oct  2 23:10:41.600: INFO: Pod "pod-subpath-test-inlinevolume-b877": Phase="Pending", Reason="", readiness=false. Elapsed: 244.884525ms
Oct  2 23:10:43.846: INFO: Pod "pod-subpath-test-inlinevolume-b877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490284024s
Oct  2 23:10:46.093: INFO: Pod "pod-subpath-test-inlinevolume-b877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.737570502s
STEP: Saw pod success
Oct  2 23:10:46.093: INFO: Pod "pod-subpath-test-inlinevolume-b877" satisfied condition "Succeeded or Failed"
Oct  2 23:10:46.339: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-b877 container test-container-volume-inlinevolume-b877: <nil>
STEP: delete the pod
Oct  2 23:10:46.844: INFO: Waiting for pod pod-subpath-test-inlinevolume-b877 to disappear
Oct  2 23:10:47.089: INFO: Pod pod-subpath-test-inlinevolume-b877 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-b877
Oct  2 23:10:47.089: INFO: Deleting pod "pod-subpath-test-inlinevolume-b877" in namespace "provisioning-4491"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:48.095: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 364 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":8,"skipped":80,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:49.111: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:10:48.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8339" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":10,"skipped":78,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 156 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":9,"skipped":55,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:49.682: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":59,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:43.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl diff
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:882
    should check if kubectl diff finds a difference for Deployments [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":9,"skipped":59,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:49.770: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":20,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:49.917: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 66 lines ...
Oct  2 23:10:33.617: INFO: PersistentVolumeClaim pvc-f8wjs found but phase is Pending instead of Bound.
Oct  2 23:10:35.869: INFO: PersistentVolumeClaim pvc-f8wjs found and phase=Bound (2.495814293s)
Oct  2 23:10:35.869: INFO: Waiting up to 3m0s for PersistentVolume local-z4twl to have phase Bound
Oct  2 23:10:36.125: INFO: PersistentVolume local-z4twl found and phase=Bound (255.674013ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mkgq
STEP: Creating a pod to test subpath
Oct  2 23:10:36.862: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mkgq" in namespace "provisioning-981" to be "Succeeded or Failed"
Oct  2 23:10:37.106: INFO: Pod "pod-subpath-test-preprovisionedpv-mkgq": Phase="Pending", Reason="", readiness=false. Elapsed: 244.229675ms
Oct  2 23:10:39.354: INFO: Pod "pod-subpath-test-preprovisionedpv-mkgq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492595023s
Oct  2 23:10:41.600: INFO: Pod "pod-subpath-test-preprovisionedpv-mkgq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.738350071s
STEP: Saw pod success
Oct  2 23:10:41.600: INFO: Pod "pod-subpath-test-preprovisionedpv-mkgq" satisfied condition "Succeeded or Failed"
Oct  2 23:10:41.855: INFO: Trying to get logs from node ip-172-20-40-74.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-mkgq container test-container-volume-preprovisionedpv-mkgq: <nil>
STEP: delete the pod
Oct  2 23:10:42.372: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mkgq to disappear
Oct  2 23:10:42.616: INFO: Pod pod-subpath-test-preprovisionedpv-mkgq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mkgq
Oct  2 23:10:42.617: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mkgq" in namespace "provisioning-981"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:50.673: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":6,"skipped":55,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:54.647: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Oct  2 23:10:50.664: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5834" to be "Succeeded or Failed"
Oct  2 23:10:50.915: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 250.886684ms
Oct  2 23:10:53.167: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.502657084s
STEP: Saw pod success
Oct  2 23:10:53.167: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  2 23:10:53.417: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct  2 23:10:53.927: INFO: Waiting for pod pod-host-path-test to disappear
Oct  2 23:10:54.181: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.527 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":11,"skipped":80,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:54.692: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 25 lines ...
STEP: Destroying namespace "services-5473" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:55.643: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":7,"skipped":55,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:49.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-c83504c9-a258-49bf-b8df-38f821bea1e4
STEP: Creating a pod to test consume configMaps
Oct  2 23:10:51.173: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cda3d056-0390-4df3-81cb-25d70d79bf7e" in namespace "projected-7298" to be "Succeeded or Failed"
Oct  2 23:10:51.417: INFO: Pod "pod-projected-configmaps-cda3d056-0390-4df3-81cb-25d70d79bf7e": Phase="Pending", Reason="", readiness=false. Elapsed: 244.393976ms
Oct  2 23:10:53.663: INFO: Pod "pod-projected-configmaps-cda3d056-0390-4df3-81cb-25d70d79bf7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490008715s
Oct  2 23:10:55.908: INFO: Pod "pod-projected-configmaps-cda3d056-0390-4df3-81cb-25d70d79bf7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.734904424s
STEP: Saw pod success
Oct  2 23:10:55.908: INFO: Pod "pod-projected-configmaps-cda3d056-0390-4df3-81cb-25d70d79bf7e" satisfied condition "Succeeded or Failed"
Oct  2 23:10:56.154: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-projected-configmaps-cda3d056-0390-4df3-81cb-25d70d79bf7e container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:10:56.661: INFO: Waiting for pod pod-projected-configmaps-cda3d056-0390-4df3-81cb-25d70d79bf7e to disappear
Oct  2 23:10:56.906: INFO: Pod pod-projected-configmaps-cda3d056-0390-4df3-81cb-25d70d79bf7e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.947 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":6,"skipped":7,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:56.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
STEP: Destroying namespace "services-5988" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":7,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:10:59.090: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 99 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-83e653b7-0fb3-4393-ba9d-2a9cc5ceaa42
STEP: Creating a pod to test consume configMaps
Oct  2 23:10:57.410: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f23c919-047e-4f14-83bb-e73e0eef633d" in namespace "configmap-7737" to be "Succeeded or Failed"
Oct  2 23:10:57.655: INFO: Pod "pod-configmaps-8f23c919-047e-4f14-83bb-e73e0eef633d": Phase="Pending", Reason="", readiness=false. Elapsed: 244.828544ms
Oct  2 23:10:59.902: INFO: Pod "pod-configmaps-8f23c919-047e-4f14-83bb-e73e0eef633d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.491521224s
STEP: Saw pod success
Oct  2 23:10:59.902: INFO: Pod "pod-configmaps-8f23c919-047e-4f14-83bb-e73e0eef633d" satisfied condition "Succeeded or Failed"
Oct  2 23:11:00.154: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-configmaps-8f23c919-047e-4f14-83bb-e73e0eef633d container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:11:00.655: INFO: Waiting for pod pod-configmaps-8f23c919-047e-4f14-83bb-e73e0eef633d to disappear
Oct  2 23:11:00.899: INFO: Pod pod-configmaps-8f23c919-047e-4f14-83bb-e73e0eef633d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.697 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":37,"failed":0}

SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":5,"skipped":38,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:20.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
• [SLOW TEST:41.362 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete pods when suspended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:111
------------------------------
{"msg":"PASSED [sig-apps] Job should delete pods when suspended","total":-1,"completed":6,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:01.955: INFO: Driver "local" does not provide raw block - skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-6bd1fbc4-6a80-4f2e-9fd6-b1dd7428a576
STEP: Creating a pod to test consume secrets
Oct  2 23:10:56.417: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c30a0fa4-9cec-4d1c-af76-25271fbfb6b9" in namespace "projected-4405" to be "Succeeded or Failed"
Oct  2 23:10:56.658: INFO: Pod "pod-projected-secrets-c30a0fa4-9cec-4d1c-af76-25271fbfb6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 240.802944ms
Oct  2 23:10:58.899: INFO: Pod "pod-projected-secrets-c30a0fa4-9cec-4d1c-af76-25271fbfb6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482240855s
Oct  2 23:11:01.142: INFO: Pod "pod-projected-secrets-c30a0fa4-9cec-4d1c-af76-25271fbfb6b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.724651265s
STEP: Saw pod success
Oct  2 23:11:01.142: INFO: Pod "pod-projected-secrets-c30a0fa4-9cec-4d1c-af76-25271fbfb6b9" satisfied condition "Succeeded or Failed"
Oct  2 23:11:01.385: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-projected-secrets-c30a0fa4-9cec-4d1c-af76-25271fbfb6b9 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  2 23:11:01.875: INFO: Waiting for pod pod-projected-secrets-c30a0fa4-9cec-4d1c-af76-25271fbfb6b9 to disappear
Oct  2 23:11:02.116: INFO: Pod pod-projected-secrets-c30a0fa4-9cec-4d1c-af76-25271fbfb6b9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.881 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":69,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:02.626: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 113 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should validate Statefulset Status endpoints [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":6,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:11:02.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-f5fba326-9a21-47b6-9e20-fd465dd42889
STEP: Creating a pod to test consume secrets
Oct  2 23:11:03.683: INFO: Waiting up to 5m0s for pod "pod-secrets-ee5c7a71-1859-4ea7-8dba-a3cd9b83bda3" in namespace "secrets-5094" to be "Succeeded or Failed"
Oct  2 23:11:03.923: INFO: Pod "pod-secrets-ee5c7a71-1859-4ea7-8dba-a3cd9b83bda3": Phase="Pending", Reason="", readiness=false. Elapsed: 239.274436ms
Oct  2 23:11:06.161: INFO: Pod "pod-secrets-ee5c7a71-1859-4ea7-8dba-a3cd9b83bda3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.477646446s
STEP: Saw pod success
Oct  2 23:11:06.161: INFO: Pod "pod-secrets-ee5c7a71-1859-4ea7-8dba-a3cd9b83bda3" satisfied condition "Succeeded or Failed"
Oct  2 23:11:06.401: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-secrets-ee5c7a71-1859-4ea7-8dba-a3cd9b83bda3 container secret-volume-test: <nil>
STEP: delete the pod
Oct  2 23:11:06.885: INFO: Waiting for pod pod-secrets-ee5c7a71-1859-4ea7-8dba-a3cd9b83bda3 to disappear
Oct  2 23:11:07.155: INFO: Pod pod-secrets-ee5c7a71-1859-4ea7-8dba-a3cd9b83bda3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.625 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:59.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-0333833c-0495-403e-9afe-01c187057c66
STEP: Creating a pod to test consume configMaps
Oct  2 23:11:01.598: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d26e637c-d868-4874-9a16-ef28701ddae3" in namespace "projected-4078" to be "Succeeded or Failed"
Oct  2 23:11:01.846: INFO: Pod "pod-projected-configmaps-d26e637c-d868-4874-9a16-ef28701ddae3": Phase="Pending", Reason="", readiness=false. Elapsed: 247.743476ms
Oct  2 23:11:04.094: INFO: Pod "pod-projected-configmaps-d26e637c-d868-4874-9a16-ef28701ddae3": Phase="Running", Reason="", readiness=true. Elapsed: 2.496027846s
Oct  2 23:11:06.344: INFO: Pod "pod-projected-configmaps-d26e637c-d868-4874-9a16-ef28701ddae3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.745199956s
STEP: Saw pod success
Oct  2 23:11:06.344: INFO: Pod "pod-projected-configmaps-d26e637c-d868-4874-9a16-ef28701ddae3" satisfied condition "Succeeded or Failed"
Oct  2 23:11:06.591: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-projected-configmaps-d26e637c-d868-4874-9a16-ef28701ddae3 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:11:07.139: INFO: Waiting for pod pod-projected-configmaps-d26e637c-d868-4874-9a16-ef28701ddae3 to disappear
Oct  2 23:11:07.386: INFO: Pod pod-projected-configmaps-d26e637c-d868-4874-9a16-ef28701ddae3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.022 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":10,"skipped":82,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:07.925: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:11:10.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7158" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":11,"skipped":122,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":8,"skipped":101,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:09:44.678: INFO: >>> kubeConfig: /root/.kube/config
... skipping 52 lines ...
Oct  2 23:10:56.960: INFO: Waiting for pod aws-client to disappear
Oct  2 23:10:57.201: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Oct  2 23:10:57.202: INFO: Deleting PersistentVolumeClaim "pvc-h8mc6"
Oct  2 23:10:57.444: INFO: Deleting PersistentVolume "aws-k6tpb"
Oct  2 23:10:58.827: INFO: Couldn't delete PD "aws://ap-south-1a/vol-09aec3914bed39aa4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09aec3914bed39aa4 is currently attached to i-049e8578446ca957f
	status code: 400, request id: 6fe3b246-c237-47fc-b943-cb57a67a52bc
Oct  2 23:11:04.996: INFO: Couldn't delete PD "aws://ap-south-1a/vol-09aec3914bed39aa4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09aec3914bed39aa4 is currently attached to i-049e8578446ca957f
	status code: 400, request id: 5e0b5777-15ae-4f3e-b0d5-eeb1a0f8b0d0
Oct  2 23:11:11.105: INFO: Successfully deleted PD "aws://ap-south-1a/vol-09aec3914bed39aa4".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:11:11.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4581" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":9,"skipped":101,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

S
------------------------------
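The "Couldn't delete PD ..., sleeping 5s: ... VolumeInUse" lines above show the cleanup retrying the EBS DeleteVolume call until AWS reports the volume detached from the instance. A rough aws-sdk-go (v1) sketch of that retry-on-VolumeInUse pattern, with the volume ID and attempt count as placeholders:

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// deleteVolumeWithRetry keeps calling DeleteVolume, sleeping 5s whenever AWS
// answers VolumeInUse (the volume is still attached), as in the log above.
func deleteVolumeWithRetry(svc *ec2.EC2, volumeID string, attempts int) error {
	for i := 0; i < attempts; i++ {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			return nil
		}
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
			time.Sleep(5 * time.Second) // volume still attached to an instance
			continue
		}
		return err
	}
	return fmt.Errorf("volume %s still in use after %d attempts", volumeID, attempts)
}

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("ap-south-1")}))
	if err := deleteVolumeWithRetry(ec2.New(sess), "vol-0123456789abcdef0", 12); err != nil {
		fmt.Println("delete failed:", err)
	}
}
```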
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
... skipping 96 lines ...
Oct  2 23:11:08.688: INFO: Creating a PV followed by a PVC
Oct  2 23:11:09.174: INFO: Waiting for PV local-pvqr2cd to bind to PVC pvc-hvgtw
Oct  2 23:11:09.174: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hvgtw] to have phase Bound
Oct  2 23:11:09.415: INFO: PersistentVolumeClaim pvc-hvgtw found and phase=Bound (240.510065ms)
Oct  2 23:11:09.415: INFO: Waiting up to 3m0s for PersistentVolume local-pvqr2cd to have phase Bound
Oct  2 23:11:09.659: INFO: PersistentVolume local-pvqr2cd found and phase=Bound (244.468315ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
Oct  2 23:11:10.385: INFO: Waiting up to 5m0s for pod "pod-68b3c25f-f410-491b-a3b0-9919fb36b8e6" in namespace "persistent-local-volumes-test-4870" to be "Unschedulable"
Oct  2 23:11:10.626: INFO: Pod "pod-68b3c25f-f410-491b-a3b0-9919fb36b8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 240.918265ms
Oct  2 23:11:10.627: INFO: Pod "pod-68b3c25f-f410-491b-a3b0-9919fb36b8e6" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:10.616 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":8,"skipped":87,"failed":0}

S
------------------------------
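The "Waiting up to 5m0s for pod ... to be \"Unschedulable\"" wait above completes as soon as the scheduler marks the pod's PodScheduled condition False with reason Unschedulable, which is expected here because the local PV's node affinity points at a different node. A hedged sketch of that condition check (not the framework's own helper):

```go
package e2esketch

import v1 "k8s.io/api/core/v1"

// podIsUnschedulable reports whether the scheduler has rejected the pod:
// the PodScheduled condition is False with reason "Unschedulable". The test
// above polls for this instead of waiting for a terminal phase.
func podIsUnschedulable(pod *v1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodScheduled &&
			cond.Status == v1.ConditionFalse &&
			cond.Reason == v1.PodReasonUnschedulable {
			return true
		}
	}
	return false
}
```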
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:25.549 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":10,"skipped":62,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:15.411: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:11:16.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-160" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":9,"skipped":88,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Oct  2 23:11:03.944: INFO: PersistentVolumeClaim pvc-wxwcg found but phase is Pending instead of Bound.
Oct  2 23:11:06.191: INFO: PersistentVolumeClaim pvc-wxwcg found and phase=Bound (9.240439414s)
Oct  2 23:11:06.191: INFO: Waiting up to 3m0s for PersistentVolume local-zdq44 to have phase Bound
Oct  2 23:11:06.439: INFO: PersistentVolume local-zdq44 found and phase=Bound (247.226705ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lb2t
STEP: Creating a pod to test subpath
Oct  2 23:11:07.195: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lb2t" in namespace "provisioning-4161" to be "Succeeded or Failed"
Oct  2 23:11:07.444: INFO: Pod "pod-subpath-test-preprovisionedpv-lb2t": Phase="Pending", Reason="", readiness=false. Elapsed: 249.601475ms
Oct  2 23:11:09.689: INFO: Pod "pod-subpath-test-preprovisionedpv-lb2t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493765135s
Oct  2 23:11:11.933: INFO: Pod "pod-subpath-test-preprovisionedpv-lb2t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.737965256s
STEP: Saw pod success
Oct  2 23:11:11.933: INFO: Pod "pod-subpath-test-preprovisionedpv-lb2t" satisfied condition "Succeeded or Failed"
Oct  2 23:11:12.176: INFO: Trying to get logs from node ip-172-20-40-74.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-lb2t container test-container-subpath-preprovisionedpv-lb2t: <nil>
STEP: delete the pod
Oct  2 23:11:12.704: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lb2t to disappear
Oct  2 23:11:12.946: INFO: Pod pod-subpath-test-preprovisionedpv-lb2t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lb2t
Oct  2 23:11:12.946: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lb2t" in namespace "provisioning-4161"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":9,"skipped":91,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:19.373: INFO: Driver "local" does not provide raw block - skipping
... skipping 221 lines ...
Oct  2 23:11:06.867: INFO: PersistentVolumeClaim pvc-lpxpn found and phase=Bound (9.193642987s)
Oct  2 23:11:06.867: INFO: Waiting up to 3m0s for PersistentVolume nfs-6btgq to have phase Bound
Oct  2 23:11:07.133: INFO: PersistentVolume nfs-6btgq found and phase=Bound (265.878493ms)
STEP: Checking pod has write access to PersistentVolume
Oct  2 23:11:07.613: INFO: Creating nfs test pod
Oct  2 23:11:07.853: INFO: Pod should terminate with exitcode 0 (success)
Oct  2 23:11:07.853: INFO: Waiting up to 5m0s for pod "pvc-tester-jx6mx" in namespace "pv-3207" to be "Succeeded or Failed"
Oct  2 23:11:08.094: INFO: Pod "pvc-tester-jx6mx": Phase="Pending", Reason="", readiness=false. Elapsed: 241.349535ms
Oct  2 23:11:10.333: INFO: Pod "pvc-tester-jx6mx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.480417545s
STEP: Saw pod success
Oct  2 23:11:10.334: INFO: Pod "pvc-tester-jx6mx" satisfied condition "Succeeded or Failed"
Oct  2 23:11:10.334: INFO: Pod pvc-tester-jx6mx succeeded 
Oct  2 23:11:10.334: INFO: Deleting pod "pvc-tester-jx6mx" in namespace "pv-3207"
Oct  2 23:11:10.576: INFO: Wait up to 5m0s for pod "pvc-tester-jx6mx" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Oct  2 23:11:10.815: INFO: Deleting PVC pvc-lpxpn to trigger reclamation of PV 
Oct  2 23:11:10.815: INFO: Deleting PersistentVolumeClaim "pvc-lpxpn"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and a pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:187
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":3,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:25.217: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":55,"failed":0}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:57.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
• [SLOW TEST:29.871 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:206
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":9,"skipped":55,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:27.331: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":10,"skipped":106,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:11:19.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct  2 23:11:21.248: INFO: Waiting up to 5m0s for pod "security-context-16bcd3c5-a8c6-407a-8968-8e3612ba2181" in namespace "security-context-7032" to be "Succeeded or Failed"
Oct  2 23:11:21.489: INFO: Pod "security-context-16bcd3c5-a8c6-407a-8968-8e3612ba2181": Phase="Pending", Reason="", readiness=false. Elapsed: 241.329125ms
Oct  2 23:11:23.733: INFO: Pod "security-context-16bcd3c5-a8c6-407a-8968-8e3612ba2181": Phase="Pending", Reason="", readiness=false. Elapsed: 2.484761357s
Oct  2 23:11:25.975: INFO: Pod "security-context-16bcd3c5-a8c6-407a-8968-8e3612ba2181": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.727097148s
STEP: Saw pod success
Oct  2 23:11:25.975: INFO: Pod "security-context-16bcd3c5-a8c6-407a-8968-8e3612ba2181" satisfied condition "Succeeded or Failed"
Oct  2 23:11:26.217: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod security-context-16bcd3c5-a8c6-407a-8968-8e3612ba2181 container test-container: <nil>
STEP: delete the pod
Oct  2 23:11:26.708: INFO: Waiting for pod security-context-16bcd3c5-a8c6-407a-8968-8e3612ba2181 to disappear
Oct  2 23:11:26.950: INFO: Pod security-context-16bcd3c5-a8c6-407a-8968-8e3612ba2181 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.640 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":11,"skipped":106,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

SS
------------------------------
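The seccomp test above creates a pod whose container opts out of the default seccomp profile (the STEP line names the legacy seccomp.security.alpha.kubernetes.io annotation). A small sketch of the equivalent field-based securityContext in Go; the container name and image are placeholders, and this is an illustration rather than the test's own pod spec:

```go
package e2esketch

import v1 "k8s.io/api/core/v1"

// unconfinedContainer builds a container that requests an Unconfined seccomp
// profile, the field-based equivalent of the legacy annotation named above.
func unconfinedContainer() v1.Container {
	return v1.Container{
		Name:  "test-container",
		Image: "registry.k8s.io/e2e-test-images/agnhost:2.32", // placeholder image
		SecurityContext: &v1.SecurityContext{
			SeccompProfile: &v1.SeccompProfile{Type: v1.SeccompProfileTypeUnconfined},
		},
	}
}
```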
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:27.462: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 109 lines ...
Oct  2 23:11:19.055: INFO: PersistentVolumeClaim pvc-ltqmt found but phase is Pending instead of Bound.
Oct  2 23:11:21.301: INFO: PersistentVolumeClaim pvc-ltqmt found and phase=Bound (15.966501567s)
Oct  2 23:11:21.301: INFO: Waiting up to 3m0s for PersistentVolume local-xxztc to have phase Bound
Oct  2 23:11:21.546: INFO: PersistentVolume local-xxztc found and phase=Bound (244.807625ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4stf
STEP: Creating a pod to test subpath
Oct  2 23:11:22.281: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4stf" in namespace "provisioning-3493" to be "Succeeded or Failed"
Oct  2 23:11:22.526: INFO: Pod "pod-subpath-test-preprovisionedpv-4stf": Phase="Pending", Reason="", readiness=false. Elapsed: 245.022524ms
Oct  2 23:11:24.771: INFO: Pod "pod-subpath-test-preprovisionedpv-4stf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490035865s
Oct  2 23:11:27.017: INFO: Pod "pod-subpath-test-preprovisionedpv-4stf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.735621146s
STEP: Saw pod success
Oct  2 23:11:27.017: INFO: Pod "pod-subpath-test-preprovisionedpv-4stf" satisfied condition "Succeeded or Failed"
Oct  2 23:11:27.261: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-4stf container test-container-subpath-preprovisionedpv-4stf: <nil>
STEP: delete the pod
Oct  2 23:11:27.763: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4stf to disappear
Oct  2 23:11:28.008: INFO: Pod pod-subpath-test-preprovisionedpv-4stf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4stf
Oct  2 23:11:28.008: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4stf" in namespace "provisioning-3493"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Oct  2 23:11:19.159: INFO: PersistentVolumeClaim pvc-pjjf9 found but phase is Pending instead of Bound.
Oct  2 23:11:21.404: INFO: PersistentVolumeClaim pvc-pjjf9 found and phase=Bound (13.718343898s)
Oct  2 23:11:21.404: INFO: Waiting up to 3m0s for PersistentVolume local-4rv9j to have phase Bound
Oct  2 23:11:21.649: INFO: PersistentVolume local-4rv9j found and phase=Bound (244.257335ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-w5sp
STEP: Creating a pod to test subpath
Oct  2 23:11:22.383: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-w5sp" in namespace "provisioning-3575" to be "Succeeded or Failed"
Oct  2 23:11:22.627: INFO: Pod "pod-subpath-test-preprovisionedpv-w5sp": Phase="Pending", Reason="", readiness=false. Elapsed: 244.329535ms
Oct  2 23:11:24.883: INFO: Pod "pod-subpath-test-preprovisionedpv-w5sp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500663736s
Oct  2 23:11:27.130: INFO: Pod "pod-subpath-test-preprovisionedpv-w5sp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.747262856s
STEP: Saw pod success
Oct  2 23:11:27.130: INFO: Pod "pod-subpath-test-preprovisionedpv-w5sp" satisfied condition "Succeeded or Failed"
Oct  2 23:11:27.375: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-w5sp container test-container-subpath-preprovisionedpv-w5sp: <nil>
STEP: delete the pod
Oct  2 23:11:27.876: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w5sp to disappear
Oct  2 23:11:28.120: INFO: Pod pod-subpath-test-preprovisionedpv-w5sp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-w5sp
Oct  2 23:11:28.120: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-w5sp" in namespace "provisioning-3575"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":7,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:11:33.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5121" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":9,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:12.321 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:1423
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.","total":-1,"completed":4,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:37.604: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 110 lines ...
• [SLOW TEST:249.736 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:38.225: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-4539
STEP: Deleting pod verify-service-up-exec-pod-7pmhg in namespace services-4539
STEP: verifying service-disabled is not up
Oct  2 23:11:02.637: INFO: Creating new host exec pod
Oct  2 23:11:03.127: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:11:05.373: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct  2 23:11:05.373: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4539 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.168.175:80 && echo service-down-failed'
Oct  2 23:11:09.760: INFO: rc: 28
Oct  2 23:11:09.761: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.65.168.175:80 && echo service-down-failed" in pod services-4539/verify-service-down-host-exec-pod: error running /tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4539 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.168.175:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.65.168.175:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4539
STEP: adding service-proxy-name label
STEP: verifying service is not up
Oct  2 23:11:10.512: INFO: Creating new host exec pod
Oct  2 23:11:11.012: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:11:13.258: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct  2 23:11:13.259: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4539 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.222.81:80 && echo service-down-failed'
Oct  2 23:11:17.593: INFO: rc: 28
Oct  2 23:11:17.594: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.71.222.81:80 && echo service-down-failed" in pod services-4539/verify-service-down-host-exec-pod: error running /tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4539 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.222.81:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.71.222.81:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4539
STEP: removing service-proxy-name annotation
STEP: verifying service is up
Oct  2 23:11:18.337: INFO: Creating new host exec pod
... skipping 12 lines ...
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-4539
STEP: Deleting pod verify-service-up-exec-pod-mv5q6 in namespace services-4539
STEP: verifying service-disabled is still not up
Oct  2 23:11:31.881: INFO: Creating new host exec pod
Oct  2 23:11:32.374: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  2 23:11:34.622: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct  2 23:11:34.622: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4539 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.168.175:80 && echo service-down-failed'
Oct  2 23:11:38.977: INFO: rc: 28
Oct  2 23:11:38.977: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.65.168.175:80 && echo service-down-failed" in pod services-4539/verify-service-down-host-exec-pod: error running /tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4539 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.168.175:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.65.168.175:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4539
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:11:39.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:66.955 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1886
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":4,"skipped":46,"failed":0}

S
------------------------------
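In the service-proxy-name test above, "rc: 28" and "exit status 28" are curl's connect-timeout exit code; the test treats the timeout as proof that the ClusterIP stopped answering once the service-proxy-name label was applied. A hedged sketch of driving the same probe from Go, reusing the kubectl command shown in the log (error handling simplified, names are the ones from the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// serviceUnreachable runs the same curl probe the test uses inside the host
// exec pod. curl exits 28 on a connect timeout, which is the expected result
// when the service no longer has endpoints programmed for its ClusterIP.
func serviceUnreachable(namespace, pod, clusterIP string) (bool, error) {
	cmd := exec.Command("kubectl", "--namespace", namespace, "exec", pod, "--",
		"/bin/sh", "-x", "-c",
		fmt.Sprintf("curl -g -s --connect-timeout 2 http://%s:80 && echo service-down-failed", clusterIP))
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 28 {
		return true, nil // curl timed out: nothing answered on the ClusterIP
	}
	return false, fmt.Errorf("service unexpectedly reachable: %s (%v)", out, err)
}

func main() {
	ok, err := serviceUnreachable("services-4539", "verify-service-down-host-exec-pod", "100.65.168.175")
	fmt.Println(ok, err)
}
```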
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 68 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should create and stop a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":10,"skipped":64,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 56 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should adopt matching orphans and release non-matching pods
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:167
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":10,"skipped":62,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":12,"skipped":81,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:11:21.973: INFO: >>> kubeConfig: /root/.kube/config
... skipping 14 lines ...
Oct  2 23:11:33.055: INFO: PersistentVolumeClaim pvc-tvggr found but phase is Pending instead of Bound.
Oct  2 23:11:35.306: INFO: PersistentVolumeClaim pvc-tvggr found and phase=Bound (7.004551787s)
Oct  2 23:11:35.307: INFO: Waiting up to 3m0s for PersistentVolume local-rvcn4 to have phase Bound
Oct  2 23:11:35.556: INFO: PersistentVolume local-rvcn4 found and phase=Bound (249.799025ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rttq
STEP: Creating a pod to test subpath
Oct  2 23:11:36.312: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rttq" in namespace "provisioning-3685" to be "Succeeded or Failed"
Oct  2 23:11:36.562: INFO: Pod "pod-subpath-test-preprovisionedpv-rttq": Phase="Pending", Reason="", readiness=false. Elapsed: 249.820835ms
Oct  2 23:11:38.813: INFO: Pod "pod-subpath-test-preprovisionedpv-rttq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.501666166s
Oct  2 23:11:41.064: INFO: Pod "pod-subpath-test-preprovisionedpv-rttq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.752158957s
STEP: Saw pod success
Oct  2 23:11:41.064: INFO: Pod "pod-subpath-test-preprovisionedpv-rttq" satisfied condition "Succeeded or Failed"
Oct  2 23:11:41.314: INFO: Trying to get logs from node ip-172-20-40-74.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-rttq container test-container-volume-preprovisionedpv-rttq: <nil>
STEP: delete the pod
Oct  2 23:11:41.836: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rttq to disappear
Oct  2 23:11:42.086: INFO: Pod pod-subpath-test-preprovisionedpv-rttq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rttq
Oct  2 23:11:42.086: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rttq" in namespace "provisioning-3685"
... skipping 6 lines ...
Oct  2 23:11:43.088: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-driver-c7a75fd8-5124-4043-b2d1-233725d70dba && umount /tmp/local-driver-c7a75fd8-5124-4043-b2d1-233725d70dba-backend && rm -r /tmp/local-driver-c7a75fd8-5124-4043-b2d1-233725d70dba-backend] Namespace:provisioning-3685 PodName:hostexec-ip-172-20-40-74.ap-south-1.compute.internal-qf5s2 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  2 23:11:43.088: INFO: >>> kubeConfig: /root/.kube/config
Oct  2 23:11:44.640: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: command:   rm /tmp/local-driver-c7a75fd8-5124-4043-b2d1-233725d70dba && umount /tmp/local-driver-c7a75fd8-5124-4043-b2d1-233725d70dba-backend && rm -r /tmp/local-driver-c7a75fd8-5124-4043-b2d1-233725d70dba-backend
Oct  2 23:11:44.640: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: stdout:    ""
Oct  2 23:11:44.640: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: stderr:    "rm: cannot remove '/tmp/local-driver-c7a75fd8-5124-4043-b2d1-233725d70dba-backend': Device or resource busy\n"
Oct  2 23:11:44.640: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: exit code: 0
Oct  2 23:11:44.640: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "command terminated with exit code 1",
        },
        Code: 1,
    }
... skipping 108 lines ...
Oct  2 23:11:48.266: INFO: 	Container driver-registrar ready: true, restart count 0
Oct  2 23:11:48.266: INFO: 	Container mock ready: true, restart count 0
Oct  2 23:11:48.266: INFO: test-container-pod started at 2021-10-02 23:10:52 +0000 UTC (0+1 container statuses recorded)
Oct  2 23:11:48.266: INFO: 	Container webserver ready: false, restart count 0
Oct  2 23:11:48.266: INFO: csi-mockplugin-resizer-0 started at 2021-10-02 23:11:15 +0000 UTC (0+1 container statuses recorded)
Oct  2 23:11:48.266: INFO: 	Container csi-resizer ready: true, restart count 0
Oct  2 23:11:48.266: INFO: failed-jobs-history-limit-27220271--1-rhxln started at 2021-10-02 23:11:00 +0000 UTC (0+1 container statuses recorded)
Oct  2 23:11:48.266: INFO: 	Container c ready: false, restart count 1
Oct  2 23:11:48.266: INFO: kube-proxy-ip-172-20-34-88.ap-south-1.compute.internal started at 2021-10-02 23:03:30 +0000 UTC (0+1 container statuses recorded)
Oct  2 23:11:48.266: INFO: 	Container kube-proxy ready: true, restart count 0
Oct  2 23:11:48.266: INFO: coredns-5dc785954d-g9rdd started at 2021-10-02 23:04:06 +0000 UTC (0+1 container statuses recorded)
Oct  2 23:11:48.266: INFO: 	Container coredns ready: true, restart count 0
Oct  2 23:11:48.266: INFO: service-proxy-toggled-jwmm6 started at 2021-10-02 23:10:41 +0000 UTC (0+1 container statuses recorded)
... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205

      Oct  2 23:11:44.640: Unexpected error:
          <exec.CodeExitError>: {
              Err: {
                  s: "command terminated with exit code 1",
              },
              Code: 1,
          }
          command terminated with exit code 1
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:271
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":12,"skipped":81,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:11:54.524: INFO: Only supported for providers [azure] (not aws)
... skipping 141 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:11:53.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7071ddf0-d210-410f-b230-df274d9b30d1" in namespace "downward-api-6944" to be "Succeeded or Failed"
Oct  2 23:11:53.342: INFO: Pod "downwardapi-volume-7071ddf0-d210-410f-b230-df274d9b30d1": Phase="Pending", Reason="", readiness=false. Elapsed: 241.960265ms
Oct  2 23:11:55.585: INFO: Pod "downwardapi-volume-7071ddf0-d210-410f-b230-df274d9b30d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.484885067s
Oct  2 23:11:57.829: INFO: Pod "downwardapi-volume-7071ddf0-d210-410f-b230-df274d9b30d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.728906289s
STEP: Saw pod success
Oct  2 23:11:57.829: INFO: Pod "downwardapi-volume-7071ddf0-d210-410f-b230-df274d9b30d1" satisfied condition "Succeeded or Failed"
Oct  2 23:11:58.070: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod downwardapi-volume-7071ddf0-d210-410f-b230-df274d9b30d1 container client-container: <nil>
STEP: delete the pod
Oct  2 23:11:58.562: INFO: Waiting for pod downwardapi-volume-7071ddf0-d210-410f-b230-df274d9b30d1 to disappear
Oct  2 23:11:58.803: INFO: Pod downwardapi-volume-7071ddf0-d210-410f-b230-df274d9b30d1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.651 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":66,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 121 lines ...
Oct  2 23:11:49.897: INFO: PersistentVolumeClaim pvc-74mwx found but phase is Pending instead of Bound.
Oct  2 23:11:52.144: INFO: PersistentVolumeClaim pvc-74mwx found and phase=Bound (11.471048515s)
Oct  2 23:11:52.144: INFO: Waiting up to 3m0s for PersistentVolume local-tk2hb to have phase Bound
Oct  2 23:11:52.391: INFO: PersistentVolume local-tk2hb found and phase=Bound (246.783435ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qt8t
STEP: Creating a pod to test subpath
Oct  2 23:11:53.125: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qt8t" in namespace "provisioning-2879" to be "Succeeded or Failed"
Oct  2 23:11:53.369: INFO: Pod "pod-subpath-test-preprovisionedpv-qt8t": Phase="Pending", Reason="", readiness=false. Elapsed: 244.340865ms
Oct  2 23:11:55.614: INFO: Pod "pod-subpath-test-preprovisionedpv-qt8t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.488988917s
STEP: Saw pod success
Oct  2 23:11:55.614: INFO: Pod "pod-subpath-test-preprovisionedpv-qt8t" satisfied condition "Succeeded or Failed"
Oct  2 23:11:55.858: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-qt8t container test-container-volume-preprovisionedpv-qt8t: <nil>
STEP: delete the pod
Oct  2 23:11:56.354: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qt8t to disappear
Oct  2 23:11:56.602: INFO: Pod pod-subpath-test-preprovisionedpv-qt8t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qt8t
Oct  2 23:11:56.602: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qt8t" in namespace "provisioning-2879"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:04.558: INFO: Only supported for providers [azure] (not aws)
... skipping 87 lines ...
• [SLOW TEST:248.019 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:348
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":4,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:05.247: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 89 lines ...
Oct  2 23:11:54.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct  2 23:11:56.070: INFO: Waiting up to 5m0s for pod "pod-5546ab14-2c91-48fd-be60-b9344cfad60b" in namespace "emptydir-2920" to be "Succeeded or Failed"
Oct  2 23:11:56.320: INFO: Pod "pod-5546ab14-2c91-48fd-be60-b9344cfad60b": Phase="Pending", Reason="", readiness=false. Elapsed: 249.974885ms
Oct  2 23:11:58.570: INFO: Pod "pod-5546ab14-2c91-48fd-be60-b9344cfad60b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500246988s
Oct  2 23:12:00.823: INFO: Pod "pod-5546ab14-2c91-48fd-be60-b9344cfad60b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.75244406s
Oct  2 23:12:03.074: INFO: Pod "pod-5546ab14-2c91-48fd-be60-b9344cfad60b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.003943053s
Oct  2 23:12:05.324: INFO: Pod "pod-5546ab14-2c91-48fd-be60-b9344cfad60b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.254227124s
STEP: Saw pod success
Oct  2 23:12:05.324: INFO: Pod "pod-5546ab14-2c91-48fd-be60-b9344cfad60b" satisfied condition "Succeeded or Failed"
Oct  2 23:12:05.574: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-5546ab14-2c91-48fd-be60-b9344cfad60b container test-container: <nil>
STEP: delete the pod
Oct  2 23:12:06.088: INFO: Waiting for pod pod-5546ab14-2c91-48fd-be60-b9344cfad60b to disappear
Oct  2 23:12:06.342: INFO: Pod pod-5546ab14-2c91-48fd-be60-b9344cfad60b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.283 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":89,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:06.884: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 154 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":7,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:07.843: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 148 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:12:08.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-3450" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":9,"skipped":67,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:10:07.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-34" for this suite.


• [SLOW TEST:125.423 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":7,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:12:06.789: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ce18997-1fe0-4f1f-9318-119ec758b8b9" in namespace "downward-api-8018" to be "Succeeded or Failed"
Oct  2 23:12:07.040: INFO: Pod "downwardapi-volume-8ce18997-1fe0-4f1f-9318-119ec758b8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 250.409085ms
Oct  2 23:12:09.291: INFO: Pod "downwardapi-volume-8ce18997-1fe0-4f1f-9318-119ec758b8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502285256s
Oct  2 23:12:11.543: INFO: Pod "downwardapi-volume-8ce18997-1fe0-4f1f-9318-119ec758b8b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.75402675s
STEP: Saw pod success
Oct  2 23:12:11.543: INFO: Pod "downwardapi-volume-8ce18997-1fe0-4f1f-9318-119ec758b8b9" satisfied condition "Succeeded or Failed"
Oct  2 23:12:11.794: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod downwardapi-volume-8ce18997-1fe0-4f1f-9318-119ec758b8b9 container client-container: <nil>
STEP: delete the pod
Oct  2 23:12:12.301: INFO: Waiting for pod downwardapi-volume-8ce18997-1fe0-4f1f-9318-119ec758b8b9 to disappear
Oct  2 23:12:12.551: INFO: Pod downwardapi-volume-8ce18997-1fe0-4f1f-9318-119ec758b8b9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":17,"failed":0}

SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:12:13.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-fc5365a8-c4ce-4c78-81e7-24e17a6f8c95
STEP: Creating a pod to test consume configMaps
Oct  2 23:12:14.822: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c16d02d-5c7c-42ce-8a06-225fc6b3caca" in namespace "configmap-5748" to be "Succeeded or Failed"
Oct  2 23:12:15.073: INFO: Pod "pod-configmaps-6c16d02d-5c7c-42ce-8a06-225fc6b3caca": Phase="Pending", Reason="", readiness=false. Elapsed: 250.427915ms
Oct  2 23:12:17.325: INFO: Pod "pod-configmaps-6c16d02d-5c7c-42ce-8a06-225fc6b3caca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.502504818s
STEP: Saw pod success
Oct  2 23:12:17.325: INFO: Pod "pod-configmaps-6c16d02d-5c7c-42ce-8a06-225fc6b3caca" satisfied condition "Succeeded or Failed"
Oct  2 23:12:17.576: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-configmaps-6c16d02d-5c7c-42ce-8a06-225fc6b3caca container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:12:18.085: INFO: Waiting for pod pod-configmaps-6c16d02d-5c7c-42ce-8a06-225fc6b3caca to disappear
Oct  2 23:12:18.335: INFO: Pod pod-configmaps-6c16d02d-5c7c-42ce-8a06-225fc6b3caca no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.773 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:18.867: INFO: Only supported for providers [openstack] (not aws)
... skipping 48 lines ...
• [SLOW TEST:8.922 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":10,"skipped":75,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":130,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:14.514 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:530
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":14,"skipped":95,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:21.460: INFO: Only supported for providers [azure] (not aws)
... skipping 58 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":12,"skipped":69,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:12:01.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:12:21.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e5adc77-5076-46fc-8390-8975ca8a6e3b" in namespace "projected-1784" to be "Succeeded or Failed"
Oct  2 23:12:21.378: INFO: Pod "downwardapi-volume-4e5adc77-5076-46fc-8390-8975ca8a6e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 241.986635ms
Oct  2 23:12:23.620: INFO: Pod "downwardapi-volume-4e5adc77-5076-46fc-8390-8975ca8a6e3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.484189679s
STEP: Saw pod success
Oct  2 23:12:23.620: INFO: Pod "downwardapi-volume-4e5adc77-5076-46fc-8390-8975ca8a6e3b" satisfied condition "Succeeded or Failed"
Oct  2 23:12:23.862: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod downwardapi-volume-4e5adc77-5076-46fc-8390-8975ca8a6e3b container client-container: <nil>
STEP: delete the pod
Oct  2 23:12:24.360: INFO: Waiting for pod downwardapi-volume-4e5adc77-5076-46fc-8390-8975ca8a6e3b to disappear
Oct  2 23:12:24.602: INFO: Pod downwardapi-volume-4e5adc77-5076-46fc-8390-8975ca8a6e3b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.402 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":133,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:25.118: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 7 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:12:20.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-839da7b2-8190-4a00-b89b-ff1000fa8826" in namespace "projected-959" to be "Succeeded or Failed"
Oct  2 23:12:20.704: INFO: Pod "downwardapi-volume-839da7b2-8190-4a00-b89b-ff1000fa8826": Phase="Pending", Reason="", readiness=false. Elapsed: 263.364704ms
Oct  2 23:12:22.955: INFO: Pod "downwardapi-volume-839da7b2-8190-4a00-b89b-ff1000fa8826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.514460577s
Oct  2 23:12:25.207: INFO: Pod "downwardapi-volume-839da7b2-8190-4a00-b89b-ff1000fa8826": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.76611058s
STEP: Saw pod success
Oct  2 23:12:25.207: INFO: Pod "downwardapi-volume-839da7b2-8190-4a00-b89b-ff1000fa8826" satisfied condition "Succeeded or Failed"
Oct  2 23:12:25.457: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod downwardapi-volume-839da7b2-8190-4a00-b89b-ff1000fa8826 container client-container: <nil>
STEP: delete the pod
Oct  2 23:12:25.987: INFO: Waiting for pod downwardapi-volume-839da7b2-8190-4a00-b89b-ff1000fa8826 to disappear
Oct  2 23:12:26.238: INFO: Pod downwardapi-volume-839da7b2-8190-4a00-b89b-ff1000fa8826 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.813 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:26.783: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 97 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 142 lines ...
Oct  2 23:12:03.093: INFO: PersistentVolumeClaim pvc-9nlnc found but phase is Pending instead of Bound.
Oct  2 23:12:05.339: INFO: PersistentVolumeClaim pvc-9nlnc found and phase=Bound (11.476856696s)
Oct  2 23:12:05.339: INFO: Waiting up to 3m0s for PersistentVolume local-5mdb4 to have phase Bound
Oct  2 23:12:05.583: INFO: PersistentVolume local-5mdb4 found and phase=Bound (244.427035ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gf2p
STEP: Creating a pod to test subpath
Oct  2 23:12:06.325: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gf2p" in namespace "provisioning-9927" to be "Succeeded or Failed"
Oct  2 23:12:06.570: INFO: Pod "pod-subpath-test-preprovisionedpv-gf2p": Phase="Pending", Reason="", readiness=false. Elapsed: 245.147035ms
Oct  2 23:12:08.816: INFO: Pod "pod-subpath-test-preprovisionedpv-gf2p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491553399s
Oct  2 23:12:11.067: INFO: Pod "pod-subpath-test-preprovisionedpv-gf2p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.742260181s
STEP: Saw pod success
Oct  2 23:12:11.067: INFO: Pod "pod-subpath-test-preprovisionedpv-gf2p" satisfied condition "Succeeded or Failed"
Oct  2 23:12:11.312: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-gf2p container test-container-subpath-preprovisionedpv-gf2p: <nil>
STEP: delete the pod
Oct  2 23:12:11.809: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gf2p to disappear
Oct  2 23:12:12.054: INFO: Pod pod-subpath-test-preprovisionedpv-gf2p no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gf2p
Oct  2 23:12:12.054: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gf2p" in namespace "provisioning-9927"
... skipping 13 lines ...
Oct  2 23:12:17.649: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-driver-dbb1587f-bb35-4694-8b48-aab5ee585221] Namespace:provisioning-9927 PodName:hostexec-ip-172-20-54-138.ap-south-1.compute.internal-tcvvp ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  2 23:12:17.649: INFO: >>> kubeConfig: /root/.kube/config
Oct  2 23:12:19.148: INFO: exec ip-172-20-54-138.ap-south-1.compute.internal: command:   rm -r /tmp/local-driver-dbb1587f-bb35-4694-8b48-aab5ee585221
Oct  2 23:12:19.148: INFO: exec ip-172-20-54-138.ap-south-1.compute.internal: stdout:    ""
Oct  2 23:12:19.148: INFO: exec ip-172-20-54-138.ap-south-1.compute.internal: stderr:    "rm: cannot remove '/tmp/local-driver-dbb1587f-bb35-4694-8b48-aab5ee585221': Device or resource busy\n"
Oct  2 23:12:19.148: INFO: exec ip-172-20-54-138.ap-south-1.compute.internal: exit code: 0
Oct  2 23:12:19.148: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "command terminated with exit code 1",
        },
        Code: 1,
    }
... skipping 277 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380

      Oct  2 23:12:19.148: Unexpected error:
          <exec.CodeExitError>: {
              Err: {
                  s: "command terminated with exit code 1",
              },
              Code: 1,
          }
          command terminated with exit code 1
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:170
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":72,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:28.583: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 51 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Oct  2 23:12:28.402: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7539" to be "Succeeded or Failed"
Oct  2 23:12:28.652: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 250.436604ms
Oct  2 23:12:30.903: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.501559258s
STEP: Saw pod success
Oct  2 23:12:30.903: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  2 23:12:31.155: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Oct  2 23:12:31.664: INFO: Waiting for pod pod-host-path-test to disappear
Oct  2 23:12:31.916: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.528 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":76,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:32.449: INFO: Only supported for providers [gce gke] (not aws)
... skipping 125 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":15,"failed":0}
[BeforeEach] [sig-storage] Mounted volume expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:12:00.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename mounted-volume-expand
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 38 lines ...
• [SLOW TEST:34.618 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":11,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:34.777: INFO: Only supported for providers [openstack] (not aws)
... skipping 148 lines ...
Oct  2 23:11:38.889: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-20l4xw4      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-20    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-20l4xw4,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-20    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-20l4xw4,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-20    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-20l4xw4,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-20l4xw4    de20364a-65f8-4ead-83e4-2b0e93fa30af 12155 0 2021-10-02 23:11:39 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-10-02 23:11:39 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-g4p8c pvc- provisioning-20  d08d2e80-2796-43b7-b6c2-ed666cd31f07 12176 0 2021-10-02 23:11:39 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-10-02 23:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-20l4xw4,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-c5ee4c83-3b70-4e3c-8c23-bd1435c0a952 in namespace provisioning-20
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Oct  2 23:11:55.521: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-79qmc" in namespace "provisioning-20" to be "Succeeded or Failed"
Oct  2 23:11:55.760: INFO: Pod "pvc-volume-tester-writer-79qmc": Phase="Pending", Reason="", readiness=false. Elapsed: 238.554825ms
Oct  2 23:11:57.997: INFO: Pod "pvc-volume-tester-writer-79qmc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476298378s
Oct  2 23:12:00.236: INFO: Pod "pvc-volume-tester-writer-79qmc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715266501s
Oct  2 23:12:02.475: INFO: Pod "pvc-volume-tester-writer-79qmc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.953956374s
Oct  2 23:12:04.714: INFO: Pod "pvc-volume-tester-writer-79qmc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.192810387s
STEP: Saw pod success
Oct  2 23:12:04.714: INFO: Pod "pvc-volume-tester-writer-79qmc" satisfied condition "Succeeded or Failed"
Oct  2 23:12:05.190: INFO: Pod pvc-volume-tester-writer-79qmc has the following logs: 
Oct  2 23:12:05.190: INFO: Deleting pod "pvc-volume-tester-writer-79qmc" in namespace "provisioning-20"
Oct  2 23:12:05.442: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-79qmc" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-33-208.ap-south-1.compute.internal"
Oct  2 23:12:06.394: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-dgb9z" in namespace "provisioning-20" to be "Succeeded or Failed"
Oct  2 23:12:06.631: INFO: Pod "pvc-volume-tester-reader-dgb9z": Phase="Pending", Reason="", readiness=false. Elapsed: 237.272795ms
Oct  2 23:12:08.869: INFO: Pod "pvc-volume-tester-reader-dgb9z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475227479s
Oct  2 23:12:11.108: INFO: Pod "pvc-volume-tester-reader-dgb9z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.713880522s
Oct  2 23:12:13.346: INFO: Pod "pvc-volume-tester-reader-dgb9z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.951730434s
Oct  2 23:12:15.586: INFO: Pod "pvc-volume-tester-reader-dgb9z": Phase="Pending", Reason="", readiness=false. Elapsed: 9.191926849s
Oct  2 23:12:17.824: INFO: Pod "pvc-volume-tester-reader-dgb9z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.430564112s
STEP: Saw pod success
Oct  2 23:12:17.824: INFO: Pod "pvc-volume-tester-reader-dgb9z" satisfied condition "Succeeded or Failed"
Oct  2 23:12:18.302: INFO: Pod pvc-volume-tester-reader-dgb9z has the following logs: hello world

Oct  2 23:12:18.302: INFO: Deleting pod "pvc-volume-tester-reader-dgb9z" in namespace "provisioning-20"
Oct  2 23:12:18.545: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-dgb9z" to be fully deleted
Oct  2 23:12:18.783: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-g4p8c] to have phase Bound
Oct  2 23:12:19.020: INFO: PersistentVolumeClaim pvc-g4p8c found and phase=Bound (237.514866ms)
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":5,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:36.653: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 44 lines ...
• [SLOW TEST:12.439 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":14,"skipped":174,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:39.549: INFO: Only supported for providers [azure] (not aws)
... skipping 68 lines ...
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Oct  2 23:12:36.018: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  2 23:12:36.018: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-mj8l
STEP: Creating a pod to test exec-volume-test
Oct  2 23:12:36.265: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-mj8l" in namespace "volume-6820" to be "Succeeded or Failed"
Oct  2 23:12:36.510: INFO: Pod "exec-volume-test-inlinevolume-mj8l": Phase="Pending", Reason="", readiness=false. Elapsed: 245.515806ms
Oct  2 23:12:38.756: INFO: Pod "exec-volume-test-inlinevolume-mj8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.49157296s
STEP: Saw pod success
Oct  2 23:12:38.757: INFO: Pod "exec-volume-test-inlinevolume-mj8l" satisfied condition "Succeeded or Failed"
Oct  2 23:12:39.002: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod exec-volume-test-inlinevolume-mj8l container exec-container-inlinevolume-mj8l: <nil>
STEP: delete the pod
Oct  2 23:12:39.498: INFO: Waiting for pod exec-volume-test-inlinevolume-mj8l to disappear
Oct  2 23:12:39.743: INFO: Pod exec-volume-test-inlinevolume-mj8l no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-mj8l
Oct  2 23:12:39.743: INFO: Deleting pod "exec-volume-test-inlinevolume-mj8l" in namespace "volume-6820"
... skipping 41 lines ...
Oct  2 23:12:42.746: INFO: AfterEach: Cleaning up test resources.
Oct  2 23:12:42.746: INFO: pvc is nil
Oct  2 23:12:42.746: INFO: Deleting PersistentVolume "hostpath-8p9tz"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":15,"skipped":178,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:43.009: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 61 lines ...
Oct  2 23:12:33.918: INFO: PersistentVolumeClaim pvc-dzl2m found but phase is Pending instead of Bound.
Oct  2 23:12:36.161: INFO: PersistentVolumeClaim pvc-dzl2m found and phase=Bound (6.973090678s)
Oct  2 23:12:36.162: INFO: Waiting up to 3m0s for PersistentVolume local-hzkr9 to have phase Bound
Oct  2 23:12:36.403: INFO: PersistentVolume local-hzkr9 found and phase=Bound (241.515595ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-2kdd
STEP: Creating a pod to test exec-volume-test
Oct  2 23:12:37.131: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-2kdd" in namespace "volume-5573" to be "Succeeded or Failed"
Oct  2 23:12:37.372: INFO: Pod "exec-volume-test-preprovisionedpv-2kdd": Phase="Pending", Reason="", readiness=false. Elapsed: 241.725926ms
Oct  2 23:12:39.616: INFO: Pod "exec-volume-test-preprovisionedpv-2kdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.48530996s
STEP: Saw pod success
Oct  2 23:12:39.616: INFO: Pod "exec-volume-test-preprovisionedpv-2kdd" satisfied condition "Succeeded or Failed"
Oct  2 23:12:39.931: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod exec-volume-test-preprovisionedpv-2kdd container exec-container-preprovisionedpv-2kdd: <nil>
STEP: delete the pod
Oct  2 23:12:40.425: INFO: Waiting for pod exec-volume-test-preprovisionedpv-2kdd to disappear
Oct  2 23:12:40.666: INFO: Pod exec-volume-test-preprovisionedpv-2kdd no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-2kdd
Oct  2 23:12:40.666: INFO: Deleting pod "exec-volume-test-preprovisionedpv-2kdd" in namespace "volume-5573"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":14,"skipped":70,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:43.679: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 120 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 120 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":15,"skipped":114,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:48.799: INFO: Only supported for providers [vsphere] (not aws)
... skipping 14 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":12,"skipped":124,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:12:35.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 39 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1233
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":13,"skipped":124,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 28 lines ...
Oct  2 23:11:07.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Oct  2 23:11:08.874: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  2 23:11:09.356: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-1678" in namespace "volume-1678" to be "Succeeded or Failed"
Oct  2 23:11:09.630: INFO: Pod "hostpath-symlink-prep-volume-1678": Phase="Pending", Reason="", readiness=false. Elapsed: 273.724753ms
Oct  2 23:11:11.869: INFO: Pod "hostpath-symlink-prep-volume-1678": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.512963053s
STEP: Saw pod success
Oct  2 23:11:11.869: INFO: Pod "hostpath-symlink-prep-volume-1678" satisfied condition "Succeeded or Failed"
Oct  2 23:11:11.870: INFO: Deleting pod "hostpath-symlink-prep-volume-1678" in namespace "volume-1678"
Oct  2 23:11:12.114: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-1678" to be fully deleted
Oct  2 23:11:12.352: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Oct  2 23:11:15.069: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-1678 exec hostpathsymlink-injector --namespace=volume-1678 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-1678' > /opt/0/index.html'
... skipping 98 lines ...
STEP: Deleting pod hostpathsymlink-client in namespace volume-1678
Oct  2 23:12:47.783: INFO: Waiting for pod hostpathsymlink-client to disappear
Oct  2 23:12:48.021: INFO: Pod hostpathsymlink-client still exists
Oct  2 23:12:50.022: INFO: Waiting for pod hostpathsymlink-client to disappear
Oct  2 23:12:50.260: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Oct  2 23:12:50.501: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-1678" in namespace "volume-1678" to be "Succeeded or Failed"
Oct  2 23:12:50.739: INFO: Pod "hostpath-symlink-prep-volume-1678": Phase="Pending", Reason="", readiness=false. Elapsed: 237.898286ms
Oct  2 23:12:52.978: INFO: Pod "hostpath-symlink-prep-volume-1678": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477669122s
Oct  2 23:12:55.217: INFO: Pod "hostpath-symlink-prep-volume-1678": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.716108507s
STEP: Saw pod success
Oct  2 23:12:55.217: INFO: Pod "hostpath-symlink-prep-volume-1678" satisfied condition "Succeeded or Failed"
Oct  2 23:12:55.217: INFO: Deleting pod "hostpath-symlink-prep-volume-1678" in namespace "volume-1678"
Oct  2 23:12:55.461: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-1678" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:12:55.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1678" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":8,"skipped":52,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:12:50.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-680be990-f8cc-40a7-801e-a3261e9cc283" in namespace "downward-api-6899" to be "Succeeded or Failed"
Oct  2 23:12:50.575: INFO: Pod "downwardapi-volume-680be990-f8cc-40a7-801e-a3261e9cc283": Phase="Pending", Reason="", readiness=false. Elapsed: 250.042895ms
Oct  2 23:12:52.829: INFO: Pod "downwardapi-volume-680be990-f8cc-40a7-801e-a3261e9cc283": Phase="Pending", Reason="", readiness=false. Elapsed: 2.50384421s
Oct  2 23:12:55.079: INFO: Pod "downwardapi-volume-680be990-f8cc-40a7-801e-a3261e9cc283": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.754575964s
STEP: Saw pod success
Oct  2 23:12:55.080: INFO: Pod "downwardapi-volume-680be990-f8cc-40a7-801e-a3261e9cc283" satisfied condition "Succeeded or Failed"
Oct  2 23:12:55.329: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod downwardapi-volume-680be990-f8cc-40a7-801e-a3261e9cc283 container client-container: <nil>
STEP: delete the pod
Oct  2 23:12:55.838: INFO: Waiting for pod downwardapi-volume-680be990-f8cc-40a7-801e-a3261e9cc283 to disappear
Oct  2 23:12:56.088: INFO: Pod downwardapi-volume-680be990-f8cc-40a7-801e-a3261e9cc283 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.771 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":118,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:56.614: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":12,"skipped":18,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:12:40.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
• [SLOW TEST:16.209 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1198
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":13,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:56.714: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 103 lines ...
Oct  2 23:12:26.587: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:26.832: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:27.568: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:27.816: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:28.063: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:28.308: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:28.826: INFO: Lookups using dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local jessie_udp@dns-test-service-2.dns-3868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3868.svc.cluster.local]

Oct  2 23:12:34.072: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:34.337: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:34.582: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:34.827: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:35.562: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:35.807: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:36.058: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:36.303: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:36.792: INFO: Lookups using dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local jessie_udp@dns-test-service-2.dns-3868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3868.svc.cluster.local]

Oct  2 23:12:39.072: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:39.319: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:39.564: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:39.832: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:40.567: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:40.812: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:41.057: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:41.302: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:41.792: INFO: Lookups using dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local jessie_udp@dns-test-service-2.dns-3868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3868.svc.cluster.local]

Oct  2 23:12:44.072: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:44.317: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:44.562: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:44.810: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:45.548: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:45.799: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:46.043: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:46.288: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:46.781: INFO: Lookups using dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local jessie_udp@dns-test-service-2.dns-3868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3868.svc.cluster.local]

Oct  2 23:12:49.072: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:49.316: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:49.561: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:49.807: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:50.542: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:50.786: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:51.032: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:51.282: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3868.svc.cluster.local from pod dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af: the server could not find the requested resource (get pods dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af)
Oct  2 23:12:51.772: INFO: Lookups using dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3868.svc.cluster.local jessie_udp@dns-test-service-2.dns-3868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3868.svc.cluster.local]

Oct  2 23:12:56.775: INFO: DNS probes using dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:38.380 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":11,"skipped":84,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:57.815: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 116 lines ...
Oct  2 23:12:26.156: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct  2 23:12:26.409: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathvks6r] to have phase Bound
Oct  2 23:12:26.651: INFO: PersistentVolumeClaim csi-hostpathvks6r found but phase is Pending instead of Bound.
Oct  2 23:12:28.937: INFO: PersistentVolumeClaim csi-hostpathvks6r found and phase=Bound (2.528469606s)
STEP: Creating pod pod-subpath-test-dynamicpv-mk2t
STEP: Creating a pod to test subpath
Oct  2 23:12:29.670: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mk2t" in namespace "provisioning-6177" to be "Succeeded or Failed"
Oct  2 23:12:29.913: INFO: Pod "pod-subpath-test-dynamicpv-mk2t": Phase="Pending", Reason="", readiness=false. Elapsed: 242.598955ms
Oct  2 23:12:32.157: INFO: Pod "pod-subpath-test-dynamicpv-mk2t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486960979s
Oct  2 23:12:34.405: INFO: Pod "pod-subpath-test-dynamicpv-mk2t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.734569463s
STEP: Saw pod success
Oct  2 23:12:34.405: INFO: Pod "pod-subpath-test-dynamicpv-mk2t" satisfied condition "Succeeded or Failed"
Oct  2 23:12:34.647: INFO: Trying to get logs from node ip-172-20-40-74.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-mk2t container test-container-volume-dynamicpv-mk2t: <nil>
STEP: delete the pod
Oct  2 23:12:35.145: INFO: Waiting for pod pod-subpath-test-dynamicpv-mk2t to disappear
Oct  2 23:12:35.387: INFO: Pod pod-subpath-test-dynamicpv-mk2t no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-mk2t
Oct  2 23:12:35.387: INFO: Deleting pod "pod-subpath-test-dynamicpv-mk2t" in namespace "provisioning-6177"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:59.514: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":5,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:12:59.658: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:244
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":14,"skipped":125,"failed":0}
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:12:54.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Oct  2 23:12:56.332: INFO: Waiting up to 5m0s for pod "var-expansion-f7afd99b-68e1-4122-9f98-1107501c323c" in namespace "var-expansion-1440" to be "Succeeded or Failed"
Oct  2 23:12:56.573: INFO: Pod "var-expansion-f7afd99b-68e1-4122-9f98-1107501c323c": Phase="Pending", Reason="", readiness=false. Elapsed: 241.546626ms
Oct  2 23:12:58.815: INFO: Pod "var-expansion-f7afd99b-68e1-4122-9f98-1107501c323c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.483841411s
STEP: Saw pod success
Oct  2 23:12:58.816: INFO: Pod "var-expansion-f7afd99b-68e1-4122-9f98-1107501c323c" satisfied condition "Succeeded or Failed"
Oct  2 23:12:59.057: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod var-expansion-f7afd99b-68e1-4122-9f98-1107501c323c container dapi-container: <nil>
STEP: delete the pod
Oct  2 23:12:59.548: INFO: Waiting for pod var-expansion-f7afd99b-68e1-4122-9f98-1107501c323c to disappear
Oct  2 23:12:59.792: INFO: Pod var-expansion-f7afd99b-68e1-4122-9f98-1107501c323c no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.420 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":125,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:00.301: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 67 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-b156faea-f756-43da-aec4-534d2f267042
STEP: Creating a pod to test consume configMaps
Oct  2 23:12:58.507: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cadc52b-8ccf-4348-849d-a02df8293b1f" in namespace "configmap-315" to be "Succeeded or Failed"
Oct  2 23:12:58.752: INFO: Pod "pod-configmaps-8cadc52b-8ccf-4348-849d-a02df8293b1f": Phase="Pending", Reason="", readiness=false. Elapsed: 244.687316ms
Oct  2 23:13:00.997: INFO: Pod "pod-configmaps-8cadc52b-8ccf-4348-849d-a02df8293b1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.489707661s
STEP: Saw pod success
Oct  2 23:13:00.997: INFO: Pod "pod-configmaps-8cadc52b-8ccf-4348-849d-a02df8293b1f" satisfied condition "Succeeded or Failed"
Oct  2 23:13:01.242: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-configmaps-8cadc52b-8ccf-4348-849d-a02df8293b1f container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:13:01.759: INFO: Waiting for pod pod-configmaps-8cadc52b-8ccf-4348-849d-a02df8293b1f to disappear
Oct  2 23:13:02.005: INFO: Pod pod-configmaps-8cadc52b-8ccf-4348-849d-a02df8293b1f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.716 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":14,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":10,"skipped":95,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:03.195: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":127,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:13:01.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9185059-eb9d-43ff-9cf1-0331519c0d43" in namespace "projected-7195" to be "Succeeded or Failed"
Oct  2 23:13:02.097: INFO: Pod "downwardapi-volume-c9185059-eb9d-43ff-9cf1-0331519c0d43": Phase="Pending", Reason="", readiness=false. Elapsed: 242.332206ms
Oct  2 23:13:04.340: INFO: Pod "downwardapi-volume-c9185059-eb9d-43ff-9cf1-0331519c0d43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.485065161s
STEP: Saw pod success
Oct  2 23:13:04.340: INFO: Pod "downwardapi-volume-c9185059-eb9d-43ff-9cf1-0331519c0d43" satisfied condition "Succeeded or Failed"
Oct  2 23:13:04.582: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod downwardapi-volume-c9185059-eb9d-43ff-9cf1-0331519c0d43 container client-container: <nil>
STEP: delete the pod
Oct  2 23:13:05.075: INFO: Waiting for pod downwardapi-volume-c9185059-eb9d-43ff-9cf1-0331519c0d43 to disappear
Oct  2 23:13:05.317: INFO: Pod downwardapi-volume-c9185059-eb9d-43ff-9cf1-0331519c0d43 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.435 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":138,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:54.525 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
... skipping 133 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:183
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":2,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:10.966: INFO: Driver local doesn't support ext4 -- skipping
... skipping 109 lines ...
Oct  2 23:10:07.130: INFO: PersistentVolumeClaim csi-hostpathlkxvf found but phase is Pending instead of Bound.
Oct  2 23:10:09.380: INFO: PersistentVolumeClaim csi-hostpathlkxvf found but phase is Pending instead of Bound.
Oct  2 23:10:11.628: INFO: PersistentVolumeClaim csi-hostpathlkxvf found but phase is Pending instead of Bound.
Oct  2 23:10:13.873: INFO: PersistentVolumeClaim csi-hostpathlkxvf found and phase=Bound (18.223433789s)
STEP: Creating pod pod-subpath-test-dynamicpv-9hvr
STEP: Creating a pod to test subpath
Oct  2 23:10:14.610: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9hvr" in namespace "provisioning-9994" to be "Succeeded or Failed"
Oct  2 23:10:14.855: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 245.664345ms
Oct  2 23:10:17.103: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493051612s
Oct  2 23:10:19.348: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.73851278s
Oct  2 23:10:21.593: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.983058628s
Oct  2 23:10:23.843: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 9.233588035s
Oct  2 23:10:26.089: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.479605024s
STEP: Saw pod success
Oct  2 23:10:26.089: INFO: Pod "pod-subpath-test-dynamicpv-9hvr" satisfied condition "Succeeded or Failed"
Oct  2 23:10:26.334: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-9hvr container test-container-subpath-dynamicpv-9hvr: <nil>
STEP: delete the pod
Oct  2 23:10:26.838: INFO: Waiting for pod pod-subpath-test-dynamicpv-9hvr to disappear
Oct  2 23:10:27.084: INFO: Pod pod-subpath-test-dynamicpv-9hvr no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9hvr
Oct  2 23:10:27.084: INFO: Deleting pod "pod-subpath-test-dynamicpv-9hvr" in namespace "provisioning-9994"
STEP: Creating pod pod-subpath-test-dynamicpv-9hvr
STEP: Creating a pod to test subpath
Oct  2 23:10:27.576: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9hvr" in namespace "provisioning-9994" to be "Succeeded or Failed"
Oct  2 23:10:27.820: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 244.294955ms
Oct  2 23:10:30.066: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489746893s
Oct  2 23:10:32.311: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.734856571s
Oct  2 23:10:34.556: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.9799989s
Oct  2 23:10:36.801: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 9.225017669s
Oct  2 23:10:39.047: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 11.470590496s
Oct  2 23:10:41.292: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 13.716025416s
Oct  2 23:10:43.537: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Pending", Reason="", readiness=false. Elapsed: 15.960883935s
Oct  2 23:10:45.799: INFO: Pod "pod-subpath-test-dynamicpv-9hvr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.223464332s
STEP: Saw pod success
Oct  2 23:10:45.800: INFO: Pod "pod-subpath-test-dynamicpv-9hvr" satisfied condition "Succeeded or Failed"
Oct  2 23:10:46.044: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-9hvr container test-container-subpath-dynamicpv-9hvr: <nil>
STEP: delete the pod
Oct  2 23:10:46.542: INFO: Waiting for pod pod-subpath-test-dynamicpv-9hvr to disappear
Oct  2 23:10:46.786: INFO: Pod pod-subpath-test-dynamicpv-9hvr no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9hvr
Oct  2 23:10:46.786: INFO: Deleting pod "pod-subpath-test-dynamicpv-9hvr" in namespace "provisioning-9994"
... skipping 62 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:11.004: INFO: Only supported for providers [openstack] (not aws)
... skipping 222 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:11.249: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 71 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 29 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Oct  2 23:13:04.429: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  2 23:13:04.429: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-5hjg
STEP: Creating a pod to test subpath
Oct  2 23:13:04.674: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-5hjg" in namespace "provisioning-586" to be "Succeeded or Failed"
Oct  2 23:13:04.915: INFO: Pod "pod-subpath-test-inlinevolume-5hjg": Phase="Pending", Reason="", readiness=false. Elapsed: 240.938536ms
Oct  2 23:13:07.156: INFO: Pod "pod-subpath-test-inlinevolume-5hjg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482303201s
Oct  2 23:13:09.398: INFO: Pod "pod-subpath-test-inlinevolume-5hjg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.723513687s
STEP: Saw pod success
Oct  2 23:13:09.398: INFO: Pod "pod-subpath-test-inlinevolume-5hjg" satisfied condition "Succeeded or Failed"
Oct  2 23:13:09.639: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-5hjg container test-container-subpath-inlinevolume-5hjg: <nil>
STEP: delete the pod
Oct  2 23:13:10.139: INFO: Waiting for pod pod-subpath-test-inlinevolume-5hjg to disappear
Oct  2 23:13:10.379: INFO: Pod pod-subpath-test-inlinevolume-5hjg no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-5hjg
Oct  2 23:13:10.379: INFO: Deleting pod "pod-subpath-test-inlinevolume-5hjg" in namespace "provisioning-586"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":101,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:5.810 seconds]
[sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
   should support creating IngressClass API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":17,"skipped":139,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:11.681: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 126 lines ...
STEP: Deleting pod hostexec-ip-172-20-33-208.ap-south-1.compute.internal-d86p7 in namespace volumemode-8606
Oct  2 23:13:03.686: INFO: Deleting pod "pod-78cd1e11-f6db-4600-a318-72f3b90ac7a9" in namespace "volumemode-8606"
Oct  2 23:13:03.938: INFO: Wait up to 5m0s for pod "pod-78cd1e11-f6db-4600-a318-72f3b90ac7a9" to be fully deleted
STEP: Deleting pv and pvc
Oct  2 23:13:06.423: INFO: Deleting PersistentVolumeClaim "pvc-5gr8f"
Oct  2 23:13:06.667: INFO: Deleting PersistentVolume "aws-c2r78"
Oct  2 23:13:07.230: INFO: Couldn't delete PD "aws://ap-south-1a/vol-066a55da62391ee98", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-066a55da62391ee98 is currently attached to i-049e8578446ca957f
	status code: 400, request id: 577cdb65-7738-4802-adac-0725d4e68b2a
Oct  2 23:13:13.353: INFO: Successfully deleted PD "aws://ap-south-1a/vol-066a55da62391ee98".
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:13:13.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-8606" for this suite.
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:13:13.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5354" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":15,"skipped":109,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":12,"skipped":102,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:13.873: INFO: Only supported for providers [azure] (not aws)
... skipping 117 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:13:13.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8aa4bab-ffa0-4ed6-9b95-84734195a4d3" in namespace "projected-6339" to be "Succeeded or Failed"
Oct  2 23:13:13.410: INFO: Pod "downwardapi-volume-c8aa4bab-ffa0-4ed6-9b95-84734195a4d3": Phase="Pending", Reason="", readiness=false. Elapsed: 241.135896ms
Oct  2 23:13:15.654: INFO: Pod "downwardapi-volume-c8aa4bab-ffa0-4ed6-9b95-84734195a4d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.484392012s
STEP: Saw pod success
Oct  2 23:13:15.654: INFO: Pod "downwardapi-volume-c8aa4bab-ffa0-4ed6-9b95-84734195a4d3" satisfied condition "Succeeded or Failed"
Oct  2 23:13:15.896: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod downwardapi-volume-c8aa4bab-ffa0-4ed6-9b95-84734195a4d3 container client-container: <nil>
STEP: delete the pod
Oct  2 23:13:16.397: INFO: Waiting for pod downwardapi-volume-c8aa4bab-ffa0-4ed6-9b95-84734195a4d3 to disappear
Oct  2 23:13:16.638: INFO: Pod downwardapi-volume-c8aa4bab-ffa0-4ed6-9b95-84734195a4d3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.407 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":144,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:17.187: INFO: Only supported for providers [gce gke] (not aws)
... skipping 53 lines ...
Oct  2 23:12:37.866: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct  2 23:12:39.144: INFO: Successfully created a new PD: "aws://ap-south-1a/vol-0ff19187df36d1d9a".
Oct  2 23:12:39.144: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-xzft
STEP: Creating a pod to test exec-volume-test
Oct  2 23:12:39.387: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-xzft" in namespace "volume-8934" to be "Succeeded or Failed"
Oct  2 23:12:39.624: INFO: Pod "exec-volume-test-inlinevolume-xzft": Phase="Pending", Reason="", readiness=false. Elapsed: 237.252995ms
Oct  2 23:12:41.864: INFO: Pod "exec-volume-test-inlinevolume-xzft": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477366511s
Oct  2 23:12:44.103: INFO: Pod "exec-volume-test-inlinevolume-xzft": Phase="Pending", Reason="", readiness=false. Elapsed: 4.716441995s
Oct  2 23:12:46.341: INFO: Pod "exec-volume-test-inlinevolume-xzft": Phase="Pending", Reason="", readiness=false. Elapsed: 6.95473883s
Oct  2 23:12:48.580: INFO: Pod "exec-volume-test-inlinevolume-xzft": Phase="Pending", Reason="", readiness=false. Elapsed: 9.193742245s
Oct  2 23:12:50.820: INFO: Pod "exec-volume-test-inlinevolume-xzft": Phase="Pending", Reason="", readiness=false. Elapsed: 11.432808959s
Oct  2 23:12:53.064: INFO: Pod "exec-volume-test-inlinevolume-xzft": Phase="Pending", Reason="", readiness=false. Elapsed: 13.677684445s
Oct  2 23:12:55.302: INFO: Pod "exec-volume-test-inlinevolume-xzft": Phase="Pending", Reason="", readiness=false. Elapsed: 15.91504938s
Oct  2 23:12:57.541: INFO: Pod "exec-volume-test-inlinevolume-xzft": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.154756715s
STEP: Saw pod success
Oct  2 23:12:57.542: INFO: Pod "exec-volume-test-inlinevolume-xzft" satisfied condition "Succeeded or Failed"
Oct  2 23:12:57.779: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod exec-volume-test-inlinevolume-xzft container exec-container-inlinevolume-xzft: <nil>
STEP: delete the pod
Oct  2 23:12:58.264: INFO: Waiting for pod exec-volume-test-inlinevolume-xzft to disappear
Oct  2 23:12:58.501: INFO: Pod exec-volume-test-inlinevolume-xzft no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-xzft
Oct  2 23:12:58.501: INFO: Deleting pod "exec-volume-test-inlinevolume-xzft" in namespace "volume-8934"
Oct  2 23:12:59.062: INFO: Couldn't delete PD "aws://ap-south-1a/vol-0ff19187df36d1d9a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ff19187df36d1d9a is currently attached to i-075a98111b6649d4c
	status code: 400, request id: 033a7222-5c5a-4161-90d4-4f3ba9e1bdde
Oct  2 23:13:05.177: INFO: Couldn't delete PD "aws://ap-south-1a/vol-0ff19187df36d1d9a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ff19187df36d1d9a is currently attached to i-075a98111b6649d4c
	status code: 400, request id: ed78c043-dbf8-410a-b787-8b1237153235
Oct  2 23:13:11.523: INFO: Couldn't delete PD "aws://ap-south-1a/vol-0ff19187df36d1d9a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ff19187df36d1d9a is currently attached to i-075a98111b6649d4c
	status code: 400, request id: 066bbdcc-45b2-4a4b-ab31-470befeba222
Oct  2 23:13:17.925: INFO: Successfully deleted PD "aws://ap-south-1a/vol-0ff19187df36d1d9a".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:13:17.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8934" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":64,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 107 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should modify fsGroup if fsGroupPolicy=File
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":10,"skipped":106,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":30,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:21.102 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":13,"skipped":112,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:35.054: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 172 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:38.670: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
Oct  2 23:13:19.680: INFO: PersistentVolumeClaim pvc-bsp9z found but phase is Pending instead of Bound.
Oct  2 23:13:21.926: INFO: PersistentVolumeClaim pvc-bsp9z found and phase=Bound (15.965091485s)
Oct  2 23:13:21.926: INFO: Waiting up to 3m0s for PersistentVolume local-pqtd4 to have phase Bound
Oct  2 23:13:22.172: INFO: PersistentVolume local-pqtd4 found and phase=Bound (245.761215ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jdp7
STEP: Creating a pod to test subpath
Oct  2 23:13:22.907: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jdp7" in namespace "provisioning-1848" to be "Succeeded or Failed"
Oct  2 23:13:23.152: INFO: Pod "pod-subpath-test-preprovisionedpv-jdp7": Phase="Pending", Reason="", readiness=false. Elapsed: 244.822755ms
Oct  2 23:13:25.399: INFO: Pod "pod-subpath-test-preprovisionedpv-jdp7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491567911s
Oct  2 23:13:27.644: INFO: Pod "pod-subpath-test-preprovisionedpv-jdp7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.736796347s
STEP: Saw pod success
Oct  2 23:13:27.644: INFO: Pod "pod-subpath-test-preprovisionedpv-jdp7" satisfied condition "Succeeded or Failed"
Oct  2 23:13:27.889: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-jdp7 container test-container-volume-preprovisionedpv-jdp7: <nil>
STEP: delete the pod
Oct  2 23:13:28.387: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jdp7 to disappear
Oct  2 23:13:28.634: INFO: Pod pod-subpath-test-preprovisionedpv-jdp7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jdp7
Oct  2 23:13:28.634: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jdp7" in namespace "provisioning-1848"
... skipping 6 lines ...
Oct  2 23:13:29.615: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-driver-d3ca7667-839b-4c26-8fca-d243967e5327 && umount /tmp/local-driver-d3ca7667-839b-4c26-8fca-d243967e5327-backend && rm -r /tmp/local-driver-d3ca7667-839b-4c26-8fca-d243967e5327-backend] Namespace:provisioning-1848 PodName:hostexec-ip-172-20-54-138.ap-south-1.compute.internal-sg9rq ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  2 23:13:29.615: INFO: >>> kubeConfig: /root/.kube/config
Oct  2 23:13:31.111: INFO: exec ip-172-20-54-138.ap-south-1.compute.internal: command:   rm /tmp/local-driver-d3ca7667-839b-4c26-8fca-d243967e5327 && umount /tmp/local-driver-d3ca7667-839b-4c26-8fca-d243967e5327-backend && rm -r /tmp/local-driver-d3ca7667-839b-4c26-8fca-d243967e5327-backend
Oct  2 23:13:31.111: INFO: exec ip-172-20-54-138.ap-south-1.compute.internal: stdout:    ""
Oct  2 23:13:31.111: INFO: exec ip-172-20-54-138.ap-south-1.compute.internal: stderr:    "rm: cannot remove '/tmp/local-driver-d3ca7667-839b-4c26-8fca-d243967e5327-backend': Device or resource busy\n"
Oct  2 23:13:31.111: INFO: exec ip-172-20-54-138.ap-south-1.compute.internal: exit code: 0
Oct  2 23:13:31.111: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "command terminated with exit code 1",
        },
        Code: 1,
    }
... skipping 262 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Oct  2 23:13:31.111: Unexpected error:
          <exec.CodeExitError>: {
              Err: {
                  s: "command terminated with exit code 1",
              },
              Code: 1,
          }
          command terminated with exit code 1
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:271
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":57,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:40.603: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:13:41.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6966" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":10,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:41.643: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:13:42.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6581" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":6,"skipped":64,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path"]}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":15,"skipped":28,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:42.911: INFO: Driver emptydir doesn't support ext3 -- skipping
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":18,"skipped":131,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:52.977: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 75 lines ...
• [SLOW TEST:71.327 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":16,"skipped":184,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":14,"skipped":117,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:13:54.450: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 128 lines ...
Oct  2 23:13:15.211: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-84967zc2b
STEP: creating a claim
Oct  2 23:13:15.454: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-ck7g
STEP: Creating a pod to test subpath
Oct  2 23:13:16.191: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ck7g" in namespace "provisioning-8496" to be "Succeeded or Failed"
Oct  2 23:13:16.433: INFO: Pod "pod-subpath-test-dynamicpv-ck7g": Phase="Pending", Reason="", readiness=false. Elapsed: 242.777835ms
Oct  2 23:13:18.676: INFO: Pod "pod-subpath-test-dynamicpv-ck7g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485871571s
Oct  2 23:13:20.919: INFO: Pod "pod-subpath-test-dynamicpv-ck7g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.728294317s
Oct  2 23:13:23.162: INFO: Pod "pod-subpath-test-dynamicpv-ck7g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.971766143s
Oct  2 23:13:25.404: INFO: Pod "pod-subpath-test-dynamicpv-ck7g": Phase="Pending", Reason="", readiness=false. Elapsed: 9.213737109s
Oct  2 23:13:27.648: INFO: Pod "pod-subpath-test-dynamicpv-ck7g": Phase="Pending", Reason="", readiness=false. Elapsed: 11.456993545s
... skipping 40511 lines ...
3       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:01:56.260158       1 trace.go:205] Trace[280489977]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:46.258) (total time: 10001ms):\nTrace[280489977]: [10.001418766s] [10.001418766s] END\nE1002 23:01:56.260180       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.119632       1 trace.go:205] Trace[839814136]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:57.036) (total time: 18082ms):\nTrace[839814136]: [18.082697775s] [18.082697775s] END\nE1002 23:02:15.119656       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.119786       1 trace.go:205] Trace[1285931614]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:55.289) (total time: 19830ms):\nTrace[1285931614]: [19.830566018s] [19.830566018s] END\nE1002 23:02:15.119796       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.119944       1 trace.go:205] Trace[266863344]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:55.602) (total time: 19517ms):\nTrace[266863344]: [19.517457356s] [19.517457356s] END\nE1002 23:02:15.119953       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.120058       1 trace.go:205] Trace[91147211]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:55.748) (total time: 19371ms):\nTrace[91147211]: [19.37178092s] [19.37178092s] END\nE1002 23:02:15.120066       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.120180       1 trace.go:205] Trace[940047268]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:55.763) (total time: 19356ms):\nTrace[940047268]: [19.356188216s] [19.356188216s] END\nE1002 23:02:15.120190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.120314       1 trace.go:205] Trace[380049043]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:56.233) (total time: 
18887ms):\nTrace[380049043]: [18.887011749s] [18.887011749s] END\nE1002 23:02:15.120322       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.120446       1 trace.go:205] Trace[360450865]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:56.947) (total time: 18173ms):\nTrace[360450865]: [18.173157139s] [18.173157139s] END\nE1002 23:02:15.120454       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.125950       1 trace.go:205] Trace[1086764425]: \"Reflector ListAndWatch\" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205 (02-Oct-2021 23:01:55.956) (total time: 19169ms):\nTrace[1086764425]: [19.169824477s] [19.169824477s] END\nE1002 23:02:15.125968       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.126072       1 trace.go:205] Trace[1630779326]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:55.324) (total time: 19801ms):\nTrace[1630779326]: [19.801947354s] [19.801947354s] END\nE1002 23:02:15.126080       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.126169       1 trace.go:205] Trace[2138190593]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:02:00.885) (total time: 14240ms):\nTrace[2138190593]: [14.240274647s] [14.240274647s] END\nE1002 23:02:15.126177       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.126265       1 trace.go:205] Trace[1807292679]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:59.920) (total time: 15205ms):\nTrace[1807292679]: [15.205562221s] [15.205562221s] END\nE1002 23:02:15.126272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.126369       1 trace.go:205] Trace[276504356]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:55.051) (total time: 20074ms):\nTrace[276504356]: [20.074822662s] [20.074822662s] END\nE1002 23:02:15.126377       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake 
timeout\nI1002 23:02:15.126681       1 trace.go:205] Trace[690076792]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:55.439) (total time: 19686ms):\nTrace[690076792]: [19.686918627s] [19.686918627s] END\nE1002 23:02:15.126697       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.126813       1 trace.go:205] Trace[1164547499]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:56.215) (total time: 18910ms):\nTrace[1164547499]: [18.910904541s] [18.910904541s] END\nE1002 23:02:15.126822       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI1002 23:02:15.126915       1 trace.go:205] Trace[140166070]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (02-Oct-2021 23:01:55.866) (total time: 19260ms):\nTrace[140166070]: [19.260204344s] [19.260204344s] END\nE1002 23:02:15.126922       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nE1002 23:02:18.390023       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"\nI1002 23:02:20.908045       1 node_tree.go:65] Added node \"ip-172-20-45-140.ap-south-1.compute.internal\" in group \"ap-south-1:\\x00:ap-south-1a\" to NodeTree\nI1002 23:02:23.961609       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...\nI1002 23:02:23.983741       1 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler\nI1002 23:02:25.759148       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file \nI1002 23:02:25.759565       1 tlsconfig.go:178] \"Loaded client CA\" index=0 certName=\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\" certDetail=\"\\\"kubernetes-ca\\\" [] issuer=\\\"<self>\\\" (2021-09-30 22:58:23 +0000 UTC to 2031-09-30 22:58:23 +0000 UTC (now=2021-10-02 23:02:25.759533224 +0000 UTC))\"\nI1002 23:02:25.759805       1 tlsconfig.go:200] \"Loaded serving cert\" certName=\"serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\" certDetail=\"\\\"kube-scheduler\\\" [serving] validServingFor=[kube-scheduler.kube-system.svc.cluster.local] issuer=\\\"kubernetes-ca\\\" (2021-09-30 22:59:53 +0000 UTC to 2023-01-25 05:59:53 +0000 UTC (now=2021-10-02 23:02:25.759781455 +0000 UTC))\"\nI1002 23:02:25.760045       1 named_certificates.go:53] \"Loaded SNI cert\" index=0 certName=\"self-signed loopback\" certDetail=\"\\\"apiserver-loopback-client@1633215702\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\"apiserver-loopback-client-ca@1633215701\\\" (2021-10-02 22:01:41 +0000 UTC to 2022-10-02 22:01:41 +0000 UTC (now=2021-10-02 23:02:25.760019855 +0000 UTC))\"\nI1002 23:03:05.335233       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kube-system/kops-controller-wqxwk\" node=\"ip-172-20-45-140.ap-south-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI1002 23:03:05.345661       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-h84lk\" node=\"ip-172-20-45-140.ap-south-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI1002 23:03:05.574109       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-84d4cfd89c-jm7kb\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI1002 23:03:05.611060       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-5dc785954d-882wl\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI1002 23:03:05.642125       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kube-system/dns-controller-848dc45d58-xq976\" node=\"ip-172-20-45-140.ap-south-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI1002 23:03:05.656764       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-controller-545495d878-xp5dq\" node=\"ip-172-20-45-140.ap-south-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI1002 23:03:57.596190       1 node_tree.go:65] Added node \"ip-172-20-54-138.ap-south-1.compute.internal\" in group \"ap-south-1:\\x00:ap-south-1a\" to NodeTree\nI1002 23:03:57.647129       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-p84zh\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=2 feasibleNodes=1\nI1002 23:03:57.813862       1 node_tree.go:65] Added node \"ip-172-20-40-74.ap-south-1.compute.internal\" in group \"ap-south-1:\\x00:ap-south-1a\" to NodeTree\nI1002 23:03:57.843006       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-g6ttp\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=3 feasibleNodes=1\nI1002 23:03:57.946681       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kube-system/coredns-autoscaler-84d4cfd89c-jm7kb\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=3 feasibleNodes=1\nI1002 23:03:57.946768       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kube-system/coredns-5dc785954d-882wl\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=3 feasibleNodes=1\nI1002 23:04:00.606804       1 node_tree.go:65] Added node \"ip-172-20-34-88.ap-south-1.compute.internal\" in group \"ap-south-1:\\x00:ap-south-1a\" to NodeTree\nI1002 23:04:00.636482       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-7gvbx\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=4 feasibleNodes=1\nI1002 23:04:00.870093       1 node_tree.go:65] Added node \"ip-172-20-33-208.ap-south-1.compute.internal\" in group \"ap-south-1:\\x00:ap-south-1a\" to NodeTree\nI1002 23:04:00.890940       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-6227k\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:04:06.747943       1 scheduler.go:672] \"Successfully 
bound pod to node\" pod=\"kube-system/coredns-5dc785954d-g9rdd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=3\nI1002 23:07:29.895238       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-5033/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:30.003827       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"hostpath-1004/pod-host-path-test\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:30.050261       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-2158/test-webserver-be60cc95-ebc7-4eb9-8f76-ac572382b526\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:30.080900       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-runtime-363/termination-message-containeradeed3dd-267d-4305-bb11-eab5250a653f\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:30.132195       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-5033/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:30.276155       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/service-headless-fwdvj\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:30.290117       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/service-headless-cmc6x\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:30.290391       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/service-headless-jsg8q\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:30.373320       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-5033/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:30.387799       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-2483/nfs-server\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:30.485101       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1988/pod-subpath-test-inlinevolume-n6bt\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:30.598346       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-9027/simpletest.rc-fwfqx\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:30.607531       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-9027/simpletest.rc-6nfb8\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:30.634164       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-5033/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:30.693758       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-9606/test-rollover-controller-2nhp9\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:30.790740       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"persistent-local-volumes-test-8328/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-88mmv\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:30.820330       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-6644/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-9mpgs\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:31.020389       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-2271/test-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:31.353840       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-7161/startup-e551171b-c6a8-45ae-b7eb-900a8ae3d549\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:31.409881       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"aggregator-5224/sample-apiserver-deployment-64f6b9dc99-ghtcs\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:31.696040       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-3954/pod-configmaps-6b7442e9-6e96-4680-bd85-605df1b9ec33\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:32.132831       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-542/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-4v6nq\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:32.195910       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4732/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-x59s2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:32.441687       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-8221/sample-webhook-deployment-78988fc6cd-n7b2v\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:32.871105       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-6694/httpd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:32.980326       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-5161/agnhost-primary-2jr54\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:33.626090       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-5585/pod-ephm-test-projected-vwdd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:33.715395       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8138/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-6rhss\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:34.443523       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-5161/agnhost-primary-v592x\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:36.809162       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2734/inline-volume-bcv9x\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-bcv9x-my-volume\\\" not found.\"\nI1002 23:07:36.836506       1 scheduler.go:672] \"Successfully bound pod to 
node\" pod=\"pods-3185/pod-exec-websocket-0188dddf-53ea-4c87-b8ae-b6f0ff8b77c4\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:38.547436       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-9655/pod-85c369f5-3f5e-4b5a-9416-a055cb08055f\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:40.237688       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/service-headless-toggled-xbvm2\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:40.246142       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/service-headless-toggled-nxb8d\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:40.251875       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/service-headless-toggled-rhjf9\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:40.525297       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-618/pod-subpath-test-inlinevolume-wcf6\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:41.520229       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6507-7724/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:43.140820       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3255/simpletest.deployment-9858f564d-qsp7z\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:43.148945       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3255/simpletest.deployment-9858f564d-224mr\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:43.959035       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/verify-service-up-host-exec-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:45.269941       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-3091/pod-62aa1956-5edd-4e80-b4b5-6a9ac4ccea16\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:45.505799       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-9528/sample-webhook-deployment-78988fc6cd-b6svl\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:46.381105       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-test-4305/busybox-user-65534-1d4197e1-d0c7-44c5-9e91-8c1e21a86006\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:46.447123       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8328/pod-4a743711-83cd-4e61-aef7-2bfe762fd009\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:46.666893       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/verify-service-up-exec-pod-688hz\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:47.909788       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"deployment-9606/test-rollover-deployment-78bc8b888c-l5nt8\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:48.934865       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-2734-363/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:49.614854       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2734/inline-volume-tester-5954q\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-5954q-my-volume-0\\\" not found.\"\nI1002 23:07:49.869640       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-9606/test-rollover-deployment-98c5f4599-k65wq\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:51.274421       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-6644/pod-fd96e426-5d66-4993-816c-348c18060e1d\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:51.496796       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-5519/frontend-685fc574d5-b2psv\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:51.524966       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-5519/frontend-685fc574d5-ng65z\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:51.525086       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-5519/frontend-685fc574d5-8z2xt\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:51.670868       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-8221/to-be-attached-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:52.009363       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-2271/test-host-network-pod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:52.029795       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-542/exec-volume-test-preprovisionedpv-459q\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:52.093716       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-99/metadata-volume-0450555f-b9fc-4707-8614-f312d309c30c\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:52.491228       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-5014/nfs-server\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:52.852693       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8328/pod-8d2a8b18-4215-4b6e-b5a3-e23ff7f2b76c\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:53.082304       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-5033/test-container-pod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:53.330216       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4257/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-cm6fw\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:53.455796       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8138/pod-3e5e7247-0548-4304-abee-713d039f2e1c\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:53.570607       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-5519/agnhost-primary-5db8ddd565-4772b\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:54.508047       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-6644/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-kt626\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:55.448300       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/verify-service-down-host-exec-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:55.657966       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-5519/agnhost-replica-6bcf79b489-bc8gx\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:55.680663       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-5519/agnhost-replica-6bcf79b489-n4dxc\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:55.711665       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-2483/pvc-tester-p4s7s\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:58.230959       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-3138/downwardapi-volume-d35db762-e151-4670-acfa-ea0d714f9032\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:58.593800       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-9587/startup-e638d598-e274-469c-b139-1cc0e00065e9\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:07:59.091043       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4478-4825/csi-mockplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:59.560028       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4478-4825/csi-mockplugin-attacher-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:07:59.952713       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-5014/pvc-tester-p4mhm\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:00.089883       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-971/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-4zg7c\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:01.454060       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-2483/pvc-tester-gvp4z\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:03.274847       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"port-forwarding-7029/pfpod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:05.273557       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"services-2362/verify-service-down-host-exec-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:05.646781       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"secrets-2566/pod-configmaps-f01d46d1-3c5a-4ebf-be8b-7f73d1aa8a0c\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:06.498810       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4257/pod-subpath-test-preprovisionedpv-wwd8\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:06.739971       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4732/pod-subpath-test-preprovisionedpv-nzmf\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:07.116552       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-3155/test-rolling-update-controller-z8r9z\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:08.830580       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6485/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-vfgz5\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:10.458316       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-9302/sample-webhook-deployment-78988fc6cd-jmbvl\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:10.695491       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-8798/pod-f9ff2e40-a2dc-44e5-a3cf-4b828092c92f\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:11.424988       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svc-latency-3939/svc-latency-rc-zqg28\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:13.277806       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-2734/inline-volume-tester-5954q\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:13.794238       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-4388/downwardapi-volume-5f8a3ff0-ae69-48e2-b3e7-5394e1f64cf1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:14.560184       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6507/pod-subpath-test-dynamicpv-ppjr\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:14.922285       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8876/pod-subpath-test-inlinevolume-rwd2\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:16.122126       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6485/pod-7fd153f9-889b-4ec9-9b06-2dd830001b15\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:16.792157       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1144-7205/csi-mockplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:17.284646       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-1144-7205/csi-mockplugin-attacher-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:18.317791       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-264/downwardapi-volume-a64846d4-358a-4993-81e3-62ef140463fc\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:18.634098       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4478/pvc-volume-tester-6vbz8\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:19.067481       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/verify-service-up-host-exec-pod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:20.908206       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-971/pod-318cc7e8-31ed-4201-bc54-c5c4b328d4ca\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:21.789578       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/verify-service-up-exec-pod-95f58\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:21.903701       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-8798/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-t4nnv\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:22.113733       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-3155/test-rolling-update-deployment-585b757574-h5jdm\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:23.103681       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-6733/downward-api-62d3f8f7-7bf6-4ae3-b07f-472e89351603\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:23.878233       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-3609/pod-00106aac-7aed-4b1c-8d6c-efcb6ea58f8c\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:23.997506       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"topology-7271/pod-053e4fbc-e8bb-43e5-bc33-980b997eede4\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:24.142723       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-971/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-n7nkg\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:24.578321       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2734/inline-volume-tester2-ls2x9\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester2-ls2x9-my-volume-0\\\" not found.\"\nI1002 23:08:27.263728       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-2734/inline-volume-tester2-ls2x9\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:28.013270       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"secrets-8328/pod-secrets-5fce45fb-b04e-4eee-ab08-96c5759dd0a6\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:28.661079       1 scheduler.go:672] \"Successfully 
bound pod to node\" pod=\"provisioning-1513-440/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:28.865582       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-2708/pod-configmaps-9c2cf8fc-f63b-4029-abe7-028f86aae276\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:30.164831       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-2362/verify-service-down-host-exec-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:30.254254       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-runtime-6312/termination-message-containerce7374d1-5e16-45c6-bca1-24c60aeeda8c\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:31.745650       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4830-3674/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:31.951981       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3723/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-g9qws\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:37.020898       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1513/pod-subpath-test-dynamicpv-dz5l\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:37.919497       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4830/hostpath-injector\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:39.221060       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3649/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-pw7vp\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:40.581136       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3678-7675/csi-mockplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:40.815700       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4897-7321/csi-hostpathplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:41.576781       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-8204/deployment-shared-map-item-removal-55649fd747-7j5vk\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:41.590763       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-8204/deployment-shared-map-item-removal-55649fd747-p9bgh\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:41.595903       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-8204/deployment-shared-map-item-removal-55649fd747-fz69l\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:42.018901       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-2455/pod-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:42.019247       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-1144/pvc-volume-tester-xdqkq\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:42.196705       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-8204/deployment-shared-map-item-removal-55649fd747-85t8w\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:42.265887       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-2455/pod-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:42.488568       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-2455/pod-2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:43.591888       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6298-6866/csi-mockplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:44.071732       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6298-6866/csi-mockplugin-attacher-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:45.112720       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7907-3270/csi-mockplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:45.298074       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3678/pvc-volume-tester-lhk89\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:45.832701       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-6184/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-h8pzv\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:47.104673       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-2824/agnhost-primary-v9hdl\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:49.595230       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7907/pvc-volume-tester-lfv86\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:51.059428       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3723/pod-subpath-test-preprovisionedpv-d88s\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:51.194938       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3649/pod-subpath-test-preprovisionedpv-c64m\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:51.458672       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-9806/test-webserver-fc2329be-2ba4-4a1e-ba85-f09d1935e4ad\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:52.155374       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3037/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-zbzxn\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:52.989089       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"csi-mock-volumes-6298/pvc-volume-tester-w5nzd\" err=\"0/5 nodes are available: 1 node(s) did not have 
enough free storage, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity/selector.\"\nI1002 23:08:54.356956       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumelimits-5788-2593/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:55.467801       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-1649/pod-configmaps-2f0047ac-78d5-4c89-bf4c-c547e447da31\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:08:59.134435       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4830/hostpath-client\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:08:59.432482       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pvc-protection-3277/pvc-tester-schl7\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:00.136263       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-2824/replace-27220269--1-2qjt6\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:01.832484       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3723/pod-subpath-test-preprovisionedpv-d88s\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:04.021742       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubelet-test-5807/bin-false9fc7c5b7-b9ca-4c05-8ef3-d2d364f0a1d0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:04.977738       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9076/pod-subpath-test-inlinevolume-np5n\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:06.367634       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3037/pod-subpath-test-preprovisionedpv-j7gn\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:06.666491       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-6184/local-injector\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:08.057909       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-4805/nfs-server\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:09.328598       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"pvc-protection-3277/pvc-tester-qnkhn\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-protection9s6qb\\\" is being deleted.\"\nI1002 23:09:09.485795       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4897/pod-subpath-test-dynamicpv-xpfr\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:12.293929       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replication-controller-5252/condition-test-rt48c\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:12.303775       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replication-controller-5252/condition-test-b2f49\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 
23:09:14.426127       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3079/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-8qt25\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:17.183747       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8915/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-xh424\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:18.198830       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-6567/downwardapi-volume-98d0142f-61c2-4c45-b4fb-79ed9d116962\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:19.815260       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-2282/httpd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:19.978590       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-6184/local-client\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:20.622973       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-7204/test-webserver-57ffafea-3304-4337-8b72-65d9396d1542\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:22.369759       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-9857/httpd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:22.525524       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-4805/pvc-tester-sw98l\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:22.658637       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-1173/exec-volume-test-preprovisionedpv-pw5c\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:23.972518       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"conntrack-6476/pod-client\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:24.072623       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9464/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-jsn8x\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:27.156104       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"conntrack-6476/pod-server-1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:31.214567       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-7685/pod-projected-configmaps-fef83052-7a1d-449a-b615-51f7558999e7\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:32.036238       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-2282/run-log-test\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:33.268752       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-4759/metadata-volume-2dc01cdf-8416-4b52-aa27-96e47060b9a8\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:33.815677       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"containers-1578/client-containers-bfe03fea-ee0b-495f-b2cd-8de6e5f2b703\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:36.444118       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8915/pod-subpath-test-preprovisionedpv-zxxb\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:36.519441       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"conntrack-6476/pod-server-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:36.711245       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3079/pod-subpath-test-preprovisionedpv-c24k\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:38.297348       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-3728/pod-submit-remove-5f0a8197-c46b-4367-abae-b92369943342\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:38.848960       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replication-controller-1446/pod-adoption\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:44.142001       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-3531/pod-handle-http-request\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:47.033967       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-401/downward-api-2d457624-3430-41c7-960d-760f1049093e\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:47.086100       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-3531/pod-with-poststart-http-hook\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:48.767835       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-runtime-5758/image-pull-test7758bd67-b5a2-4af5-9274-4df12204c598\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:49.198357       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-284-342/csi-mockplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:49.911095       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-2k625\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:49.911394       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-qthbt\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:49.929890       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-fr5br\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:49.938095       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-mkmpj\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:49.938245       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-qq9vv\" 
node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:49.938306       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-kw4nz\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:50.944841       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-p2wh9\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:50.978789       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-6584b976d5-kxk7x\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:50.997417       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-6584b976d5-8rs99\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:51.069391       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-6584b976d5-84lbd\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:51.096817       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4581/aws-injector\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:51.475840       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-6584b976d5-fpgps\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:51.714153       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-6584b976d5-d2bpn\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:51.971065       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-9mqvx\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:52.141214       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-3906/pod-projected-configmaps-030797c0-4e56-480d-a08c-5038e9eec45c\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:52.223722       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-d92th\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:52.467597       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-9tq9w\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:52.561923       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-6584b976d5-9w9fv\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:52.719548       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-f6h5s\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:53.392221       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-7965/fail-once-local--1-plzk6\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:53.405069       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-7965/fail-once-local--1-r8fs4\" 
node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:53.563103       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-sfdw9\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:54.792494       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9994-9692/csi-hostpathplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:54.854290       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"subpath-6611/pod-subpath-test-configmap-8qk4\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:54.977871       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-6397/pod-22de3ba9-450a-4af0-a50e-711a24add189\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:55.465940       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-7965/fail-once-local--1-9v468\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:56.292603       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-z9jmt\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:56.628992       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2539/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-scg5w\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:09:57.470858       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-7965/fail-once-local--1-w9pgr\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:58.757628       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-ps9k8\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:58.777594       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-7wc56\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:58.779340       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-tqhks\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:58.779407       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-9462/dns-test-7cf7a9a5-ba89-4636-ab37-65cca7736379\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:59.203652       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-1005/ss-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:09:59.414411       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"port-forwarding-469/pfpod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:00.142452       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-2824/replace-27220270--1-knx9x\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:00.525575       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"deployment-8177/webserver-79656dddfc-kfljx\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:02.144406       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-cdmn4\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:02.761955       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-pkkj4\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:02.805393       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-wnqxf\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:02.815672       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-wq8d8\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:03.559632       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-m7kcz\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:03.902320       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2539/pod-8cb328b2-19b2-4c12-ae28-0bdbd85ae1f1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:05.162965       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-9nccl\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:05.168668       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-m8wkx\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:06.297036       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-mb57n\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:06.304348       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-prt9r\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:06.794809       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-p94bq\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:07.050594       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-gxdtb\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:07.300891       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-z5dcn\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:07.387437       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3141/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-5vr4x\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:07.546398       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-5tnk2\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:07.619153     
  1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4192/pod-always-succeed6437469c-2a17-45fb-97de-50bf97074b7e\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:07.734149       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-1069/httpd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:07.796772       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-79656dddfc-84dls\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:08.057917       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-847dcfb7fb-c8cvr\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:11.051941       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-284/pvc-volume-tester-zxjbp\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:11.136593       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"containers-3036/client-containers-98d581d1-7fac-4a6f-ab3c-4ff40d7d7f67\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:11.873906       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-2303/pod-2706ee34-3be5-4c10-a8bf-13b621ef1a46\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:12.125643       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-3104-8629/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:12.399347       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2539/pod-4c8f43f5-8bc2-48e3-b220-a11548028679\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:12.807703       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4630/pod-subpath-test-inlinevolume-hwh5\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:14.483109       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9994/pod-subpath-test-dynamicpv-9hvr\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:15.990703       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-3104/pod-d3b8d917-8a06-4ea6-8b60-692bbce3a4e7\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:16.509202       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-2319/security-context-d9f9457e-2093-4b37-82d6-31783be67f14\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:16.876191       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-5104/pod-update-activedeadlineseconds-bf67c154-ef85-4ff6-b121-24def99019a7\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:18.007765       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-7f9dc79d7c-5lq4z\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:18.025149       1 scheduler.go:672] \"Successfully bound pod to 
node\" pod=\"deployment-8177/webserver-7f9dc79d7c-k6bgb\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:18.057645       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-7f9dc79d7c-rvz4t\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:19.174728       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-3104/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-vptrn\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:19.710558       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-7f9dc79d7c-hdrv7\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:20.315619       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-7f9dc79d7c-xwhv4\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:21.873780       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3141/pod-subpath-test-preprovisionedpv-n2xq\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:21.904391       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-327/suspend-false-to-true--1-rf47w\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:21.929506       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-327/suspend-false-to-true--1-4jkln\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:22.888380       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-755/pod-bf581be3-0573-4ea7-a671-76626b0dbfaa\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:23.873112       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"proxy-9445/proxy-service-4r6zt-2dhcq\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:24.487132       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"init-container-6754/pod-init-d1459c4d-b672-4359-91a0-e91087f485d2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:24.856331       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7959/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-8k5q2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:25.604768       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-981/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-7sclj\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:27.446010       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9994/pod-subpath-test-dynamicpv-9hvr\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:27.642610       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6533/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:27.888658       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6533/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 
feasibleNodes=1\nI1002 23:10:28.067549       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3141/pod-subpath-test-preprovisionedpv-n2xq\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:28.143400       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6533/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:28.389510       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6533/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:29.070572       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7372/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-zwlsl\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:29.993451       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5300/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-sshb8\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:34.329921       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4581/aws-client\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:34.378821       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/service-proxy-disabled-fqjqt\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:34.379072       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/service-proxy-disabled-mlwpx\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:34.392847       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/service-proxy-disabled-4s69n\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:36.731166       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-981/pod-subpath-test-preprovisionedpv-mkgq\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:37.133725       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7372/pod-subpath-test-preprovisionedpv-4zkp\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:37.336209       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7959/pod-subpath-test-preprovisionedpv-6hhl\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:37.896695       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5300/pod-subpath-test-preprovisionedpv-6pq7\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:39.413851       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7720-1493/csi-mockplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:39.844902       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7720-1493/csi-mockplugin-attacher-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:41.225503       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"provisioning-4491/pod-subpath-test-inlinevolume-b877\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:41.367628       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/service-proxy-toggled-rwqnc\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:41.388058       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/service-proxy-toggled-jwmm6\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:41.412747       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/service-proxy-toggled-kz8m5\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:44.867112       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-8177/webserver-7f9dc79d7c-nhh24\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:45.102185       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/verify-service-up-host-exec-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:46.734839       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-7879/httpd-deployment-8584777d8-plrhm\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:47.210774       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7720/pvc-volume-tester-ktgbk\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:47.282131       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5889/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-fcrbr\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:49.475529       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-runtime-5393/termination-message-container44e1bc8b-40fd-4c6d-b940-bc9c3267ac17\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:49.838236       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/verify-service-up-exec-pod-7pmhg\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:50.533771       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"hostpath-5834/pod-host-path-test\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:50.756524       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4161/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-lnxqd\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:51.045387       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-7298/pod-projected-configmaps-cda3d056-0390-4df3-81cb-25d70d79bf7e\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:51.310989       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-3207/nfs-server\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:52.906640       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6533/test-container-pod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:53.163120       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6533/host-test-container-pod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:55.529997       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-2077/ss-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:56.295978       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-4405/pod-projected-secrets-c30a0fa4-9cec-4d1c-af76-25271fbfb6b9\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:56.332051       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9628/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-8dqct\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:10:57.279365       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-7737/pod-configmaps-8f23c919-047e-4f14-83bb-e73e0eef633d\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:10:59.250677       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"conntrack-9267/pod-client\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:00.166000       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-34/failed-jobs-history-limit-27220271--1-rhxln\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:00.699961       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3493/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-m5mjv\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:01.470655       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-4078/pod-projected-configmaps-d26e637c-d868-4874-9a16-ef28701ddae3\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:02.487112       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"conntrack-9267/pod-server-1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:02.752672       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/verify-service-down-host-exec-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:03.046657       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3575/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-q6r7h\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:03.561057       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"secrets-5094/pod-secrets-ee5c7a71-1859-4ea7-8dba-a3cd9b83bda3\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:04.561353       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4870/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-q5ss4\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:06.420659       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5889/local-injector\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 
feasibleNodes=1\nI1002 23:11:07.089700       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4161/pod-subpath-test-preprovisionedpv-lb2t\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:07.729450       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-3207/pvc-tester-jx6mx\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:08.123561       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9628/pod-subpath-test-preprovisionedpv-n428\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:09.764742       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"tables-7158/pod-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:10.261510       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4870/pod-68b3c25f-f410-491b-a3b0-9919fb36b8e6\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI1002 23:11:10.628039       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/verify-service-down-host-exec-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:11.397269       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4870/pod-68b3c25f-f410-491b-a3b0-9919fb36b8e6\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-hvgtw\\\" not found.\"\nI1002 23:11:11.943250       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"conntrack-9267/pod-server-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:12.469450       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-1678/hostpathsymlink-injector\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:13.398263       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4870/pod-68b3c25f-f410-491b-a3b0-9919fb36b8e6\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-hvgtw\\\" not found.\"\nI1002 23:11:14.616883       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1856-7760/csi-mockplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:14.684622       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-5043/all-succeed--1-dq8w9\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:14.694899       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-5043/all-succeed--1-8zrq8\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:15.077789       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1856-7760/csi-mockplugin-resizer-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:16.268195       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-5043/all-succeed--1-nw8m6\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:16.281159    
   1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-5043/all-succeed--1-7wqzv\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:17.401488       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4870/pod-68b3c25f-f410-491b-a3b0-9919fb36b8e6\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-hvgtw\\\" not found.\"\nI1002 23:11:18.456996       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/verify-service-up-host-exec-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:18.625082       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-536/httpd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:19.648053       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5889/local-client\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:21.120639       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-7032/security-context-16bcd3c5-a8c6-407a-8968-8e3612ba2181\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:21.188563       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/verify-service-up-exec-pod-mv5q6\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:22.150649       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3493/pod-subpath-test-preprovisionedpv-4stf\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:22.253620       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3575/pod-subpath-test-preprovisionedpv-w5sp\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:22.649124       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-6286/aws-injector\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:23.024618       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8681-7709/csi-mockplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:23.272636       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8681-7709/csi-mockplugin-attacher-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:23.525761       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8681-7709/csi-mockplugin-resizer-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:23.595207       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3685/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-qf5s2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:29.032547       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"cross-namespace-pod-affinity-6329/no-cross-namespace-affinity\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI1002 23:11:29.276081       1 factory.go:381] \"Unable 
to schedule pod; no fit; waiting\" pod=\"cross-namespace-pod-affinity-6329/with-namespaces\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI1002 23:11:29.354603       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2283/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:29.598180       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2283/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:29.617282       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-6192/update-demo-nautilus-775fh\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:29.634257       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-6192/update-demo-nautilus-bcvn4\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:29.840247       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5000-4545/csi-mockplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:29.856068       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2283/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:30.093040       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2283/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:30.750967       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8681/pvc-volume-tester-g8lk8\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:30.922507       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-536/run-test\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:31.759732       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"cross-namespace-pod-affinity-6329/with-namespace-selector\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI1002 23:11:31.996423       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4539/verify-service-down-host-exec-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:33.003027       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-2879/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-d8fsv\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:34.161260       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1856/pvc-volume-tester-kqszm\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:35.632726       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1887/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-l4hfp\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:36.182333       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"provisioning-3685/pod-subpath-test-preprovisionedpv-rttq\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:39.266042       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5000/pvc-volume-tester-n66b5\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:40.530864       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1856/pvc-volume-tester-7x5vj\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:41.412754       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1887/pod-b89c7d51-7e65-42c9-b34a-4cdcfbf38ae0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:44.209715       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-20/pod-c5ee4c83-3b70-4e3c-8c23-bd1435c0a952\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:44.373216       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-9559/pod-a6a87fc6-27ed-4878-8e81-ba57fa64f936\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:45.833295       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-7839/pod-97562df2-27fc-497e-85ac-64d0d376bf5c\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:46.134222       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9927/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-tcvvp\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:47.582330       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-9559/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-7trf8\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:47.743930       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1887/pod-d3fd48c3-10bc-4f11-9245-441eccc36b20\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:52.513763       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2283/test-container-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:52.972668       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-6944/downwardapi-volume-7071ddf0-d210-410f-b230-df274d9b30d1\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:52.996259       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-2879/pod-subpath-test-preprovisionedpv-qt8t\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:11:55.398347       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-20/pvc-volume-tester-writer-79qmc\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:55.940573       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-2920/pod-5546ab14-2c91-48fd-be60-b9344cfad60b\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:11:58.231861       1 scheduler.go:672] 
\"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5618/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-cmp65\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:00.127896       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-34/failed-jobs-history-limit-27220272--1-58tgw\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:00.314524       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-6286/aws-client\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:03.095574       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4165/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-mcbmk\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:03.997887       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5618/pod-28de488f-598d-463a-8887-8fda23874a3a\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:06.195783       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9927/pod-subpath-test-preprovisionedpv-gf2p\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:06.269328       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-20/pvc-volume-tester-reader-dgb9z\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:06.659354       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-8018/downwardapi-volume-8ce18997-1fe0-4f1f-9318-119ec758b8b9\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:06.985372       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"mounted-volume-expand-9692/deployment-e83b31d3-493a-463e-be95-d388b250f7ea-74cd9f749d8rbfz\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:08.801225       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4165/pod-e477d973-33f9-42b7-a835-7671920e7fe1\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:10.567269       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5618/pod-897a94b0-2bac-4fcc-8321-7b5abaaa9f07\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:11.759266       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-9222/adopt-release--1-gdxvf\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:11.768133       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-9222/adopt-release--1-844qt\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:14.693018       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-5748/pod-configmaps-6c16d02d-5c7c-42ce-8a06-225fc6b3caca\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:15.270487       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4165/pod-c782daa3-2778-4663-9829-c0a142964d30\" 
node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:15.559262       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-7839/pod-a25f9a91-500a-4f02-9ec9-4ef59b33a1dd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:17.379169       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-2289/busybox-0f8e6f2f-522f-4a07-a4be-9041435187e5\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:18.473965       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-9222/adopt-release--1-vls5q\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:20.309757       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-959/downwardapi-volume-839da7b2-8190-4a00-b89b-ff1000fa8826\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:20.975765       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4005/aws-injector\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:20.986408       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-3868/dns-test-1221ec34-c8d3-49aa-a158-8380c87d24af\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:21.007326       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-1784/downwardapi-volume-4e5adc77-5076-46fc-8390-8975ca8a6e3b\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:21.926918       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"mounted-volume-expand-9692/deployment-e83b31d3-493a-463e-be95-d388b250f7ea-74cd9f749ddh6wr\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:23.198162       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4969/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-x48qn\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:24.487275       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5573/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-7n4q9\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:25.560884       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6177-2270/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:28.271698       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"hostpath-7539/pod-host-path-test\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:28.728795       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-7877/simpletest.rc-jxvhp\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:28.776768       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-7877/simpletest.rc-rs5qd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:28.777022       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-7877/simpletest.rc-rn5pp\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI1002 23:12:28.777095       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-7877/simpletest.rc-7spxl\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:28.780623       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-7877/simpletest.rc-c54wx\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:28.780712       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-7877/simpletest.rc-hmzv4\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:28.780767       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-7877/simpletest.rc-qrvdq\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:28.815845       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-7877/simpletest.rc-q7t7m\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:28.837839       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-7877/simpletest.rc-vkdgk\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:28.837951       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-7877/simpletest.rc-vf8c4\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:29.549780       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6177/pod-subpath-test-dynamicpv-mk2t\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:36.133025       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-6820/exec-volume-test-inlinevolume-mj8l\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:37.002189       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5573/exec-volume-test-preprovisionedpv-2kdd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:38.010184       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4969/pod-subpath-test-preprovisionedpv-m6gz\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:38.113711       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-8533/agnhost-primary-tmk65\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:39.265755       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-8934/exec-volume-test-inlinevolume-xzft\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:39.621481       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-1678/hostpathsymlink-client\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:42.099685       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-9299/externalip-test-cg5jl\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:42.112421       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-9299/externalip-test-pjf89\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:45.581932       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"services-9299/execpodg9v2k\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:49.110546       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pvc-protection-4093/pvc-tester-lqrz2\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:50.125454       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4005/aws-client\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:50.194289       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-6899/downwardapi-volume-680be990-f8cc-40a7-801e-a3261e9cc283\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:52.474948       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-8606/pod-78cd1e11-f6db-4600-a318-72f3b90ac7a9\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:55.687262       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-8606/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-d86p7\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:12:56.203649       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"var-expansion-1440/var-expansion-f7afd99b-68e1-4122-9f98-1107501c323c\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:58.374498       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-315/pod-configmaps-8cadc52b-8ccf-4348-849d-a02df8293b1f\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:12:59.661964       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-7646/pod-configmaps-b7a993fd-6478-49bc-9ae2-1d8f007f8d57\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:00.128122       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-2495/successful-jobs-history-limit-27220273--1-qpkk2\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:00.300427       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-runtime-7345/termination-message-containera8c505cb-b81e-4927-8484-d880ec5406a7\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:01.210696       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-908/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-pv25k\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:01.316075       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1848/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-sg9rq\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:01.727969       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-7195/downwardapi-volume-c9185059-eb9d-43ff-9cf1-0331519c0d43\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:04.554181       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-586/pod-subpath-test-inlinevolume-5hjg\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:08.586229       1 
scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6548-3894/csi-hostpathplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:10.008395       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-6548/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI1002 23:13:10.629027       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-1869/pod-43fe9cbd-4c1e-4454-b9f6-e9ad8476c83b\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:11.472790       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6548/hostpath-injector\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:12.437681       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-741/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-6gzrs\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:13.042170       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-6339/downwardapi-volume-c8aa4bab-ffa0-4ed6-9b95-84734195a4d3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:17.397156       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-1951/pod-c1dbe85e-adc0-48fa-829a-c8e1a852d93f\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:18.116392       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-741/pod-6d101c82-2571-4a6e-be8b-370ddf5a1be0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:19.042694       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-493/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:19.291012       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-493/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:19.523026       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-493/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:19.740777       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-8528/nfs-server\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:19.763888       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-493/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:20.072781       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8496/pod-subpath-test-dynamicpv-ck7g\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:20.236256       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-6721/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI1002 23:13:21.895103       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"mount-propagation-8767/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-5k68x\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:22.397593       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4905/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-lzwf2\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:22.622726       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-908/pod-516aa46a-a7da-4bd6-8c1d-eabff215bb45\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:22.778653       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1848/pod-subpath-test-preprovisionedpv-jdp7\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:25.630376       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-1869/pod-52413bfd-31ed-4358-8127-09543b507cb6\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:25.835941       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-908/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-p579l\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:27.450215       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-6721/burstable-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI1002 23:13:29.318049       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"secrets-6873/pod-secrets-2745e30b-d186-43f0-a590-649a99e7d6b5\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:35.186567       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-8528/pvc-tester-6p92b\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:37.714652       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-2333/sample-webhook-deployment-78988fc6cd-cx2kt\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:38.415035       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"default/recycler-for-nfs-hnsww\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:39.976275       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8173/pod-subpath-test-inlinevolume-8vch\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:44.518521       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3103/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-9977b\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:44.532365       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7193/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-ft4dj\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:44.672601       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-493/test-container-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI1002 23:13:44.916629       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-493/host-test-container-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:46.841923       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-1951/pod-e4d8d2b7-5656-46d1-98ac-f50176568a64\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:47.765575       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5646/exec-volume-test-dynamicpv-w5nc\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:51.624681       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6548/hostpath-client\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:52.566025       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3103/pod-subpath-test-preprovisionedpv-s92r\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:52.594070       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7193/pod-subpath-test-preprovisionedpv-lk9l\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:13:53.083759       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-8528/pvc-tester-52cdl\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:53.962178       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"var-expansion-2311/var-expansion-f7ea6687-d18a-49bf-89b7-2c0990dfb607\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:54.832383       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7554/pod-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:55.069362       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7554/pod-1\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:55.308977       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7554/pod-2\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:56.259032       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"subpath-6769/pod-subpath-test-projected-j988\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:56.647925       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5531/emptydir-injector\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:13:58.356341       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-8844/startup-1a5b2fcf-2f0f-4e76-a5e5-89ea1a7cccd3\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:00.134623       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-2495/successful-jobs-history-limit-27220274--1-hhlvq\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:00.156976       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-2123/concurrent-27220274--1-h4z6f\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:03.007311       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"default/recycler-for-nfs-hnsww\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:03.618330       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-235/ss2-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:04.689824       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-879/pod-projected-configmaps-6f13e549-8d8d-4ad4-803d-e3f9bd2788d2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:04.791909       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3409-4869/csi-mockplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:04.886054       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-2948/pod-projected-secrets-3dd88387-9e8d-4dbd-a75a-5c71155fe71e\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:05.266091       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3409-4869/csi-mockplugin-attacher-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:11.921453       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-235/ss2-1\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:12.895911       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7188-4837/csi-mockplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:13.230593       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-235/ss2-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:13.353975       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7188-4837/csi-mockplugin-attacher-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:14.966683       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3409/pvc-volume-tester-84lx8\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:15.355025       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4509/pod-logs-websocket-744ffe37-03b8-49e7-92f3-d6b8ec59313a\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:17.363951       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-4825/security-context-1cc247eb-f871-4c2c-8786-962ec5f072d7\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:17.465004       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7188/pvc-volume-tester-l2p8j\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:22.104681       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"port-forwarding-8777/pfpod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:23.592459       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"downward-api-3649/downward-api-1729d871-8bd8-4b23-9f1c-d6ed59a41184\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:23.656886       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9926/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-gsppk\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:24.038324       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5903/aws-injector\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:24.322032       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"secrets-1328/pod-secrets-1f36d142-a49c-4760-913c-e837f26c3f2a\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:25.228010       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4177/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-ff2f4\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:25.433488       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-140/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-jn8zt\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:26.657891       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-7490/httpd\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:29.485890       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9926/pod-1b009d79-5098-4f58-b4fd-7129d35b6805\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:30.806718       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8811/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:31.049615       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8811/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:31.214183       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-140/pod-e80cb519-1bb0-4f8c-bef2-4ab82035a69b\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:31.294117       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8811/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:31.539377       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8811/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:31.877150       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-2459/pod-sharedvolume-3a817df1-d29a-4cb8-818a-50d81245098d\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:32.615533       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9171/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-jnmlt\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:36.143433       1 scheduler.go:672] \"Successfully bound 
pod to node\" pod=\"persistent-local-volumes-test-9926/pod-575a4964-1551-42df-8e41-310239d907e1\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:37.601943       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4177/pod-subpath-test-preprovisionedpv-6grt\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:37.794010       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-140/pod-94acdc59-735a-4439-8a8c-2168e6cc6b3d\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:42.855358       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5471/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-t869b\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:43.458276       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-7553-7967/csi-hostpathplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:43.932750       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-7553/inline-volume-tester-s6pwt\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:44.311237       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-7509/pod-ready\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:45.659067       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6214/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-gk2mt\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:46.991814       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-3013/test-new-deployment-847dcfb7fb-494g6\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:49.068420       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-582/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-2mknl\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:50.657196       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5471/pod-57fd8fa6-4807-4584-bbd8-7d2bfa5573b0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:51.680896       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-235/ss2-2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:53.969618       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8811/test-container-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:55.300663       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-551/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:55.544911       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-551/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:55.792324       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"pod-network-test-551/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:56.039511       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-551/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:57.026629       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5471/pod-5e89f5ee-6a60-48c1-b824-cefac6ab6785\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:14:57.584268       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-235/ss2-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:58.546055       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-5478/pod-configmaps-955b9d4e-cb1d-400a-950e-4f7c46de6fdb\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:14:59.400649       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-235/ss2-2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:00.142770       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-2123/concurrent-27220275--1-xphxr\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:02.738362       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replicaset-244/test-rs-wzpf8\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:03.473632       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3829/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-ccfn6\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:03.950916       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-419/pod-update-02767325-4744-4c91-9170-bedbbeb52b9a\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:05.991917       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-582/pod-subpath-test-preprovisionedpv-nd8w\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:07.226042       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6214/pod-subpath-test-preprovisionedpv-w6cz\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:07.272301       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"port-forwarding-5480/pfpod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:08.889807       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-7566/pod-e34770b3-827d-4309-82b0-f9d74b777dbc\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:09.173675       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3829/pod-9756967e-ef5c-4d1d-82a2-00358046f1f8\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:09.333796       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9855-4949/csi-mockplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 
feasibleNodes=1\nI1002 23:15:09.810834       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9855-4949/csi-mockplugin-attacher-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:13.382346       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"containers-6316/client-containers-101c8248-9dea-4506-a11d-c5649ec1740b\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:14.808669       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-235/ss2-1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:16.360744       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1876/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-qh4ds\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:17.282539       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9855/pvc-volume-tester-klzlw\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:18.120792       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"port-forwarding-8545/pfpod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:18.600530       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-551/test-container-pod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:19.978707       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9855/inline-volume-wnptc\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:20.031140       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-2400/test-pod-1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:20.274426       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-2400/test-pod-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:20.518653       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-2400/test-pod-3\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:20.569815       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-420/test-recreate-deployment-6cb8b65c46-4s6wh\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:20.927434       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5840-1789/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:21.225639       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-4731/dns-test-a0cbc960-da6d-4d7b-ba4f-354e070faa05\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:21.785743       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8675/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:21.978115       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-280/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-6nb7k\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:22.038220       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8675/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:22.287222       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8675/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:22.358346       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-5840/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI1002 23:15:22.538425       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8675/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:23.238862       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-936-3835/csi-hostpathplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:23.569108       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-5840/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI1002 23:15:23.784445       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-5538/deployment-585449566-57cml\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:23.797789       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-5538/deployment-585449566-qhpfh\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:23.798018       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-5538/deployment-585449566-ls2mq\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:24.043789       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-420/test-recreate-deployment-85d47dcb4-m7pw6\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:24.075800       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3769/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-ptjjc\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:24.076064       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-5538/deployment-55649fd747-89qd6\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:25.569956       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-5840/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI1002 23:15:26.645614       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"conntrack-7448/boom-server\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:27.065035       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-5395/ss2-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:28.145481       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-5395/ss2-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:29.277337       1 
scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-5395/ss2-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:29.574728       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5840/hostpath-injector\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:30.252945       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"init-container-3201/pod-init-065f1c08-b01d-441c-a418-e6d139130abc\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:31.190527       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-9280/pod-ephm-test-projected-2z49\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:33.344798       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-508/simpletest.rc-ghcml\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:33.358874       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-508/simpletest.rc-pcqng\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:33.359115       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-508/simpletest.rc-nrl94\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:33.380296       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-508/simpletest.rc-mmlw7\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:33.388603       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-508/simpletest.rc-npt5f\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:33.393861       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-508/simpletest.rc-8frxf\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:33.393927       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-508/simpletest.rc-2bqjz\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:33.406588       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-508/simpletest.rc-cm7cr\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:33.409300       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-508/simpletest.rc-xkk76\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:33.410747       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-508/simpletest.rc-gkphf\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:34.260896       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-2171/ss2-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:35.876982       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"conntrack-7448/startup-script\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:36.262941       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3769/pod-subpath-test-preprovisionedpv-snq7\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 
23:15:36.575196       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-280/pod-subpath-test-preprovisionedpv-fk4h\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:37.311192       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1876/pod-subpath-test-preprovisionedpv-k2v4\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:38.508381       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-5395/ss2-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:39.877868       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6540/pod-subpath-test-dynamicpv-95vj\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:40.081350       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-2171/ss2-1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:40.191788       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-3952/externalname-service-2dwqs\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:40.201664       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-3952/externalname-service-pqv7s\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:41.744795       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-5395/ss2-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:42.207547       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-2171/ss2-2\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:44.629517       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4996/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-pm28r\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:45.491939       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-5395/ss2-2\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:46.065525       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5840/hostpath-client\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:46.725556       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-3952/execpod2fthf\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:46.994779       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8675/test-container-pod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:47.028733       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-7163/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-hkxtt\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:47.240719       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-8675/host-test-container-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:50.960625       1 
scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6540/pod-subpath-test-dynamicpv-95vj\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:51.196659       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-5395/ss2-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:53.560501       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4996/pod-9e0071fd-53dc-410a-b633-680eed48dac0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:58.586734       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-5395/ss2-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:15:58.861749       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-9650-8336/csi-hostpathplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:15:59.303701       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-5395/ss2-2\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:01.888094       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-1532/sample-webhook-deployment-78988fc6cd-phxh5\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:02.858098       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-9650/pod-58232094-6003-42b9-9ba5-6a72d49e8154\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:03.341694       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubelet-test-8735/busybox-host-aliasesdb0e912a-4f4d-4980-8429-627f500db945\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:03.829965       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-9767/pod-eea4ca61-a99d-4645-a5e8-5110eb00719b\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:04.889267       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-2171/ss2-2\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:05.829184       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-7397/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI1002 23:16:05.961711       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/up-down-1-59gxz\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:05.998680       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/up-down-1-s8snb\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:05.998925       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/up-down-1-7j29s\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:06.185870       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-7163/local-injector\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:08.313002       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-16/downward-api-895ffae7-90ba-46f7-b9ed-29f936717adc\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:09.959847       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/up-down-2-x5kx8\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:09.990562       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/up-down-2-976tj\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:09.990813       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/up-down-2-jlw2s\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:13.745184       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-up-host-exec-pod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:15.381643       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-1949/httpd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:16.450768       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-5998/pod-configmaps-e9bdaa65-484e-4764-8e42-52587c0b384f\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:16.478370       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-up-exec-pod-nffv2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:17.742800       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2789-3607/csi-mockplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:18.235669       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-test-8716/explicit-root-uid\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:19.258676       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-7163/local-client\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:22.449556       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-5360/pod-test\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:23.027971       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4639-252/csi-mockplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:23.273866       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4639-252/csi-mockplugin-attacher-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:24.578736       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7590/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-7jf74\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:24.950001       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-up-host-exec-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:27.681784       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-up-exec-pod-9thqt\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:27.992835       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-873/pod-projected-configmaps-fb96cccc-64d4-4376-bc79-15b0032615ab\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:28.224566       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-2936/sample-webhook-deployment-78988fc6cd-27gcd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:30.347005       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2789/pvc-volume-tester-rpq8k\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:30.473853       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4639/pvc-volume-tester-njmjh\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:34.213555       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"events-4013/send-events-cc187930-2b6e-46ef-903d-7662db68652f\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:34.226030       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-8902/busybox-39393836-8b76-47ea-b8c7-5578127e54c3\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:34.381211       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-2171/ss2-1\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:38.344181       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6246/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:38.576695       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6246/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:38.815171       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6246/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:38.970986       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"var-expansion-8616/var-expansion-c965b81a-099d-481f-bbbb-1146df484b9d\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:39.051585       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6246/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:41.452930       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"secrets-6244/pod-secrets-a25a4f2e-204b-4f59-aa26-02ba7e789edc\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:44.149723       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-3504/test-ss-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:44.586742       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"projected-9327/pod-projected-secrets-2dae50ad-7550-44e4-b83a-0fe0abb00749\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:47.347145       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1094/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-7s6tz\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:47.361012       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-1095/httpd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:50.450229       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-4882/security-context-7d72ed86-97ca-49c9-844f-d8eaf77c5ff9\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:53.174503       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1094/pod-d9fbed6b-5a41-4762-9897-50a142d3220e\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:16:55.349939       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-3504/test-ss-1\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:16:56.318788       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3604/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-n5dfp\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:01.438885       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6246/test-container-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:03.041340       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4346/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-t8v8z\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:03.508153       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3604/pod-48bc1505-edb7-4d45-a2e0-7ca638bcebac\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:05.702396       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-37/security-context-31360853-68ca-4d40-88fb-c6865cb87076\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:05.993577       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-8968/sample-webhook-deployment-78988fc6cd-slxqs\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:06.573181       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-4779/dns-test-24674a66-95a6-4b53-8b1e-5a217783a87f\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:07.737449       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-694/pod-e0ec38fe-5971-4a93-aa98-506a8d7dea63\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:10.003013       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3604/pod-516f142b-ff57-4a2d-bf82-d80bece22794\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:10.522394       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6841/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:10.768250       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6841/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:11.010622       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6841/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:11.276799       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6841/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:11.935222       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4346/pod-2971f017-48de-48c9-a0fb-337999e24643\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:13.216374       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-1836/httpd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:18.443131       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3931/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-l4fsr\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:19.400682       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-5990/pod-configmaps-0a934370-67d9-43ad-a07b-54304f6f7b7c\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:25.001030       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-3203/pod-projected-secrets-22d197c8-6012-4f78-92aa-11c488e2f1ab\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:26.629393       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-7688/ss-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:27.299477       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6842/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:27.427775       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3931/pod-d2a1ab73-42ff-42f4-b2d9-7912981c2979\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:27.539193       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6842/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:27.777781       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6842/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:28.014557       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6842/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:28.081427       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-5895/pod-projected-secrets-65f4430f-6346-45c6-bc6f-3a3b2951d725\" 
node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:30.613088       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-9400/pod-configmaps-de544194-abf6-41df-b75a-1ab75cda3f8d\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:32.269859       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8761/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-p8gxz\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:33.734450       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-477/test-dns-nameservers\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:34.906979       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-867/labelsupdate248cc26c-ac96-4bc1-b202-a875b5a82384\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:35.710235       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6841/test-container-pod\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:35.937526       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3931/pod-fc9ee490-553a-4098-906c-8e474dfbf4e6\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:35.951451       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-4117/pod-handle-http-request\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:36.063666       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-2134/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-zg2jx\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:37.167979       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-4310/pod-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:37.406168       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-4310/pod-1\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:38.038851       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4989/aws-injector\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:38.891134       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-4117/pod-with-poststart-exec-hook\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:39.726351       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-9128/annotationupdatedbeaf065-53ff-4a0e-8919-24862f489e05\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:39.921007       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"crd-webhook-7378/sample-crd-conversion-webhook-deployment-697cdbd8f4-w9pnf\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:41.500729       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-test-9935/alpine-nnp-nil-b92605c0-3103-4910-a18b-c50988c99293\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:42.779899       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-1736/update-demo-nautilus-d97ft\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:42.790548       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-1736/update-demo-nautilus-wfcfc\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:47.047381       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-3106/ss-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:48.439093       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3379/simpletest-rc-to-be-deleted-8w4km\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:48.445074       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3379/simpletest-rc-to-be-deleted-trsqn\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:48.458780       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3379/simpletest-rc-to-be-deleted-qmnwf\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:48.460923       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3379/simpletest-rc-to-be-deleted-jbwfv\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:48.472787       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3379/simpletest-rc-to-be-deleted-422xm\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:48.477235       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3379/simpletest-rc-to-be-deleted-c8v9b\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:48.486099       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3379/simpletest-rc-to-be-deleted-rqb2j\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:48.500896       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3379/simpletest-rc-to-be-deleted-hlzhs\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:48.502093       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3379/simpletest-rc-to-be-deleted-5s944\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:48.504591       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3379/simpletest-rc-to-be-deleted-92cv9\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:49.059449       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-7958/logs-generator\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:50.574317       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-2171/ss2-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:51.102350       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-down-host-exec-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:51.630638       1 
scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8761/pod-subpath-test-preprovisionedpv-gqc8\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:51.671317       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-7688/ss-1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:51.724177       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-1394/liveness-ee35fbaf-c063-45db-8ca0-d958b4652f09\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:52.033352       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-1330/pod-projected-secrets-71054308-ce93-47a2-bbe7-11dc7f576e8f\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:52.438858       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6842/test-container-pod\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:52.667425       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-6842/host-test-container-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:53.126779       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-2134/local-injector\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:54.561210       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-8129-8994/csi-hostpathplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:17:55.113622       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-513/pod-projected-configmaps-692bada3-f591-463c-a652-857f605cd91a\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:59.209309       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-694/pod-05eeedae-9e99-4c83-bed5-c54edf8c59b7\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:17:59.843852       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-109/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-rqrsc\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:02.357557       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4989/aws-client\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:02.724828       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-up-host-exec-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:02.862944       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-4995/pod-handle-http-request\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:05.201089       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-3106/ss-1\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:05.432553       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-1736/update-demo-nautilus-xp84m\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:06.574699       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-2134/local-client\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:07.793258       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-109/exec-volume-test-preprovisionedpv-4p5j\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:07.813724       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"port-forwarding-2833/pfpod\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:09.053803       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9671/pod-subpath-test-dynamicpv-8f9z\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:09.193721       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-9787-8467/csi-hostpathplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:11.464706       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-up-exec-pod-2sph7\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:11.839340       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-4995/pod-with-prestop-exec-hook\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:11.958162       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-8129/pod-4af5b41e-ffd6-40c4-b246-3a63e3331485\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:13.073714       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-9787/pod-c0c786ce-eb77-49df-8eb5-ccfa6df469cb\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:15.435134       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"subpath-1854/pod-subpath-test-secret-r9wn\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:16.250127       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-9787/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-t69tm\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:20.204751       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/up-down-3-92sg9\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:20.213213       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/up-down-3-pjzhf\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:20.216017       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/up-down-3-kp6sh\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:20.657551       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-2773/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-qc5f9\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:21.608989       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"container-runtime-8825/image-pull-test4d59142a-9453-46fc-8711-04dd805816c6\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:23.754518       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-7688/ss-2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:23.970793       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-8231/pod-projected-configmaps-0cacf436-bc96-4430-84a9-c040fb69fc43\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:25.452786       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-2171/ss2-2\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:26.945730       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-up-host-exec-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:27.050295       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-3106/ss-2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:28.574869       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7623/pod-subpath-test-inlinevolume-tcm6\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:30.290079       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-2171/ss2-1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:31.761910       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-up-exec-pod-wbn7r\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:31.832884       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replication-controller-1078/rc-test-22w2v\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:32.219327       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replicaset-3757/test-rs-tg5zd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:32.234798       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replicaset-3757/test-rs-754sw\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:32.249823       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replicaset-3757/test-rs-c8jcz\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:33.135429       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-2171/ss2-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:34.807000       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replication-controller-1078/rc-test-9xnct\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:35.886445       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-9719/kube-proxy-mode-detector\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:37.828719       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-2773/exec-volume-test-preprovisionedpv-q5sw\" 
node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:37.861796       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-runtime-8922/terminate-cmd-rpa5600eb53-c03a-4005-881f-ff93b1585327\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:38.657989       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1695/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-lkkd7\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:39.705578       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8586-7866/csi-mockplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:40.181012       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-up-host-exec-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:40.941258       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-test-1153/alpine-nnp-true-03245807-2281-44a3-a986-2bd467d2d21d\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:42.197756       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-9719/echo-sourceip\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:42.914715       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-126/verify-service-up-exec-pod-mwsk5\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:43.893631       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5097/pod-subpath-test-inlinevolume-7tcg\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:44.405691       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1695/pod-cbcbbb77-edf4-49a9-b12c-a1d34977458c\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:44.562297       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-4699/e2e-test-httpd-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:45.909495       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-9719/pause-pod-fc8f75c7-6spxn\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:45.920357       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-9719/pause-pod-fc8f75c7-dhqfz\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=3\nI1002 23:18:46.368513       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-7137/slow-terminating-unready-pod-4662v\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:48.489174       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-89/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-p6nbv\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:50.117258       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9814/inline-volume-srjg7\" err=\"0/5 nodes are available: 5 persistentvolumeclaim 
\\\"inline-volume-srjg7-my-volume\\\" not found.\"\nI1002 23:18:50.726590       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1695/pod-04f8d44b-8c63-422e-a50f-9c3079f26c22\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:50.849592       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-9325-6809/csi-hostpathplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:51.313536       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-9325/inline-volume-tester-8w2rb\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:51.593990       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-9798/hostexec\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:51.637199       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-2494/pod-configmaps-57c935c8-31c2-47da-8de1-74d715134b59\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:52.373192       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-7137/execpod-dtfbp\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:52.393439       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-5299/oidc-discovery-validator\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:52.574679       1 volume_binding.go:332] \"Failed to bind volumes for pod\" pod=\"csi-mock-volumes-8586/pvc-volume-tester-qh2sm\" err=\"binding volumes: provisioning failed for PVC \\\"pvc-8lzrz\\\"\"\nE1002 23:18:52.575193       1 framework.go:863] \"Failed running PreBind plugin\" err=\"binding volumes: provisioning failed for PVC \\\"pvc-8lzrz\\\"\" plugin=\"VolumeBinding\" pod=\"csi-mock-volumes-8586/pvc-volume-tester-qh2sm\"\nE1002 23:18:52.575944       1 factory.go:397] \"Error scheduling pod; retrying\" err=\"running PreBind plugin \\\"VolumeBinding\\\": binding volumes: provisioning failed for PVC \\\"pvc-8lzrz\\\"\" pod=\"csi-mock-volumes-8586/pvc-volume-tester-qh2sm\"\nI1002 23:18:53.310679       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3859/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-x9s49\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:54.776801       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8586/pvc-volume-tester-qh2sm\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:54.966880       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-5364/e2e-test-httpd-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:56.291057       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-9325/inline-volume-tester2-295mt\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:18:57.161534       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-runtime-8922/terminate-cmd-rpofd299c3f6-af53-4d01-b3ed-168a678c25b3\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:57.253082       1 scheduler.go:672] \"Successfully 
bound pod to node\" pod=\"job-2521/all-pods-removed--1-2tqr6\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:57.271939       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-2521/all-pods-removed--1-wvc84\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:18:58.789626       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-4705/busybox1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:00.126110       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-4089/forbid-27220279--1-b7ppl\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:02.208586       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-9814-9301/csi-hostpathplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:02.901201       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9814/inline-volume-tester-vl2xd\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-vl2xd-my-volume-0\\\" not found.\"\nI1002 23:19:03.974522       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-runtime-8922/terminate-cmd-rpn0951b98d-6c03-4dc5-91fe-ff09743bf520\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:04.321361       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-5356/pod-service-account-b8dd3653-f39a-4f00-8ba4-944b5a788d8c\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:04.771638       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9814/inline-volume-tester-vl2xd\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI1002 23:19:06.344779       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7116/pod-subpath-test-dynamicpv-jdfz\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:06.778613       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-9814/inline-volume-tester-vl2xd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:06.907923       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-89/pod-subpath-test-preprovisionedpv-4xht\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:08.029936       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3859/pod-subpath-test-preprovisionedpv-cd86\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:08.770654       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-6043/pod-projected-configmaps-fb94d3db-dd62-4673-8707-789d2e7e0ca0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:08.808777       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4089-3729/csi-mockplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:09.352131       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4089-3729/csi-mockplugin-attacher-0\" 
node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:09.588268       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-8114-281/csi-hostpathplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:09.779437       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6073/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-vhchr\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:10.406521       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-6892/downwardapi-volume-10fa9ba4-4c31-4499-8867-a437d6ba8e3c\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:12.748511       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-536/run-test-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:14.224769       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"sysctl-3406/sysctl-3aaa7d3e-0ba9-420d-93fc-d8d6409f1aac\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:16.516698       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-7172/dns-test-0306afc2-655f-41e9-8b8a-afc033cd28e1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:17.725901       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-8585/pod-service-account-defaultsa\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:17.841193       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9814/inline-volume-tester2-mmrm4\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester2-mmrm4-my-volume-0\\\" not found.\"\nI1002 23:19:17.972459       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-8585/pod-service-account-mountsa\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:18.017516       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-8114/hostpath-injector\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:18.218525       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-8585/pod-service-account-nomountsa\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:18.464688       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-8585/pod-service-account-defaultsa-mountspec\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:18.714774       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-8805/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-fppft\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:18.721796       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-8585/pod-service-account-mountsa-mountspec\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:18.961748       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-8585/pod-service-account-nomountsa-mountspec\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:19.211971       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-8585/pod-service-account-defaultsa-nomountspec\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:19.455959       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-8585/pod-service-account-mountsa-nomountspec\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:19.706301       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-8585/pod-service-account-nomountsa-nomountspec\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:19.786497       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-9814/inline-volume-tester2-mmrm4\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:22.576035       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4089/pvc-volume-tester-f6cd5\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:25.277108       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-3273/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-fnvvf\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:26.451171       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-536/run-test-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:33.466937       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-8386/busybox-15217155-ca60-4de7-b744-d10c2cb16579\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:37.397713       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6073/pod-subpath-test-preprovisionedpv-wnxz\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:37.532134       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-235/ss2-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:37.572756       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-3273/pod-3cde5332-885d-4003-b780-7687629f72df\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:37.918795       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-8805/local-injector\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:38.035968       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-7836/pod-e830b529-7878-4566-b79d-7283bc7c30eb\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:40.791914       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-3273/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-gnj6k\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:43.257571       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-7836/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-nr52h\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:45.914939       1 
scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6073/pod-subpath-test-preprovisionedpv-wnxz\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:49.078979       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-3289/liveness-4a9d2a81-993a-4dd4-b713-bb5297224d05\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:49.906241       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5903/aws-client\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=3\nI1002 23:19:49.992179       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6486/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-jsh4v\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:51.210689       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-8805/local-client\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:53.222414       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-8114/hostpath-client\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:54.478854       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-6fkzh\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:54.495316       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-c2hkf\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:54.495590       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-vfd6t\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:54.514697       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-gtvcr\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:54.529258       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-kgnsl\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:54.529629       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-fn9rq\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:54.529693       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-tqjjq\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:54.549267       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-vzmds\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:54.565677       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-qm7jd\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:54.565756       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"deployment-2984/webserver-deployment-847dcfb7fb-82wdf\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:55.669739       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6486/pod-0f078cac-11c2-4038-b2bd-dd83c9973310\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:57.457871       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-7348/pod-projected-configmaps-a698a9c3-b302-4eb6-bf8f-267e255c5200\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:19:57.701173       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-8853-1854/csi-hostpathplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:59.422824       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8044-9734/csi-mockplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:19:59.894874       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8044-9734/csi-mockplugin-attacher-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:00.123880       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-4089/forbid-27220280--1-bgljm\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:00.876993       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-5167/pause\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:01.606064       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-8853/pod-aae1963c-089f-4adf-ad7e-31e90aa9201e\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:03.441742       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-9337/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-pqlfk\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:04.077069       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-9983/pod-handle-http-request\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:04.494040       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-bnkzn\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:04.514031       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-qrn9j\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:04.518590       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-8pm6h\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:04.569797       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-hjc72\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:04.578107       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-wbszj\" 
node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:06.229264       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-9173/liveness-704465b5-b303-48e4-93f7-31cbc08c49d1\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.269679       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-k5vrn\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.279492       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-lz7bl\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.300958       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-sfsmx\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.301513       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-qbszz\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.308669       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-k9wj7\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.308836       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-l7z8z\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.341793       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-gfd7b\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.341898       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-6qd5h\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.341963       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-x8wr9\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.342019       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-qtr5w\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.342074       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8044/pvc-volume-tester-tql5k\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:07.342127       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-m5qcq\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.406128       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-zs7xs\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.420738       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-vzqct\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.421076       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-6t62p\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.421149       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-2cdtg\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.421208       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-ns6b6\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.421301       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-v99qv\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.454571       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-847dcfb7fb-6zz6l\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.454872       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-c9zwb\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:07.479700       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-2984/webserver-deployment-795d758f88-hlkv6\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:08.319958       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2400/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:08.561605       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2400/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:08.798731       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2400/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:09.041558       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2400/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:10.355413       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-6118/externalname-service-c8dbr\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:10.371302       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-6118/externalname-service-8vvqn\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:11.354091       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4088/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-jdgcv\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:15.637546       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-3258/pod-3e5b16ad-5250-49da-a2d3-0e67c73a0b84\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:16.591766       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"init-container-5495/pod-init-afab71a0-ff06-4727-8b99-7ab8b496222f\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:17.083969       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-9983/pod-with-prestop-http-hook\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:19.312259       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"secrets-1084/pod-secrets-b509f66f-fd40-405a-9d9a-21a6c6bf450c\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:21.281703       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-9337/exec-volume-test-preprovisionedpv-znqf\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:22.836824       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-6118/execpod65wfb\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:26.132691       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5286/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-8dxw9\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:27.401002       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4573/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-pk7fq\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:30.520180       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5266/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-vdvn4\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:31.452688       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2400/test-container-pod\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:33.238692       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4573/pod-e57dd1ff-b31c-436f-aa5e-022cb6172799\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI1002 23:20:33.403978       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5514-8055/csi-mockplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:34.849436       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4573/pod-e57dd1ff-b31c-436f-aa5e-022cb6172799\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-kgrsw\\\" not found.\"\nI1002 23:20:36.437183       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5286/pod-subpath-test-preprovisionedpv-cnf5\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:36.851008       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4573/pod-e57dd1ff-b31c-436f-aa5e-022cb6172799\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-kgrsw\\\" not found.\"\nI1002 23:20:36.949908       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"provisioning-4088/pod-subpath-test-preprovisionedpv-n8mf\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:38.025729       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-5804/pod-f34c3a64-4799-48c2-a94f-6232867bb429\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:38.041634       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5814/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-bdmsd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:40.853657       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4573/pod-e57dd1ff-b31c-436f-aa5e-022cb6172799\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-kgrsw\\\" not found.\"\nI1002 23:20:42.641301       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5286/pod-subpath-test-preprovisionedpv-cnf5\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:42.756550       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5514/pvc-volume-tester-rkd88\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:42.833380       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-4738/pod-4488bbe2-0a08-4b8c-8465-06e8a40b05ba\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:45.512831       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-51-7108/csi-mockplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:45.757422       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7967/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-vqz8g\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:45.785922       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-51-7108/csi-mockplugin-attacher-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:47.001790       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-9258/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:47.242766       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-9258/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:47.268996       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-3258/pod-25d1f066-c1b7-4f53-a671-07640aa22241\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:47.486053       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-9258/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:47.728143       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-9258/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:49.380626       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"webhook-7455/sample-webhook-deployment-78988fc6cd-7pzkx\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:49.535500       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-7742-7707/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:51.290769       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5266/pod-subpath-test-preprovisionedpv-bkj9\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:51.393616       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5814/pod-subpath-test-preprovisionedpv-f7j9\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:51.482040       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7967/pod-cfa61bc7-c61e-4014-9bec-85a37f264e54\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:51.725951       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-3572/rs-w27bd\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:51.744702       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-3572/rs-nq8q4\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:51.744831       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-3572/rs-x46c8\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:53.057603       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-51/pvc-volume-tester-jw8pk\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:58.098492       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-3572/rs-wdc74\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:59.540851       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-8565/externalsvc-qcpx9\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:59.554902       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-8565/externalsvc-vjxtg\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:20:59.738634       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5266/pod-subpath-test-preprovisionedpv-bkj9\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:20:59.821490       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5814/pod-subpath-test-preprovisionedpv-f7j9\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:00.128217       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"cronjob-3761/concurrent-27220281--1-gsk46\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:00.716204       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-test-645/busybox-user-0-19e3d9de-896f-402b-9796-a5c645ac60d2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:02.079409       1 scheduler.go:672] 
\"Successfully bound pod to node\" pod=\"init-container-7460/pod-init-347bfc80-2038-4866-9617-e93d112a16cd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:02.295399       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-8853/pod-2b377f71-2e32-47b5-a423-519cdf077c62\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:02.788118       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-3572/rs-pvxwm\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:03.935622       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-991/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-lc6km\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:06.206860       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8564/inline-volume-sddkq\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-sddkq-my-volume\\\" not found.\"\nI1002 23:21:06.758524       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-8565/execpodxbpbl\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:07.623679       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pv-9545/pod-ephm-test-projected-v42b\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:09.869306       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-9658/test-pod-fee93e21-2213-411e-aa93-2dc972653aaf\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:10.148741       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-9258/test-container-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:10.388905       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pod-network-test-9258/host-test-container-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:15.144010       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-6463/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-jvgmf\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:15.580628       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-1275/backofflimit--1-x8fhp\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:15.890688       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-9658/test-pod-fee93e21-2213-411e-aa93-2dc972653aaf\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:17.128257       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-3980/security-context-8d5222e5-4a53-4d22-bd78-e6c885f17455\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:17.499077       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-1275/backofflimit--1-f457g\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:19.042909       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-8564-6144/csi-hostpathplugin-0\" 
node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:19.355917       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubelet-test-310/bin-falsef2950646-7764-4aab-b4c0-b8d9d359c761\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:19.783955       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8564/inline-volume-tester-zmh8q\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-zmh8q-my-volume-0\\\" not found.\"\nI1002 23:21:21.044162       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-611/dns-test-b02a0a85-c806-4c79-922d-dc315fb5d78b\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:21.889843       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-8564/inline-volume-tester-zmh8q\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:21.914900       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-9658/test-pod-fee93e21-2213-411e-aa93-2dc972653aaf\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:23.130672       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-991/pod-subpath-test-preprovisionedpv-d8dw\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:23.232214       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-342/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-n75kz\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:25.697582       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-9658/test-pod-fee93e21-2213-411e-aa93-2dc972653aaf\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:28.042122       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"prestop-1675/pod-prestop-hook-cf98efba-17e0-4773-8fc6-7ec14b3a7e29\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:30.274792       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1736/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-44jft\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:30.584195       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-342/pod-6833c616-baac-4a1a-beb4-bdf694de7497\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:31.001349       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-9627/pod-size-memory-volume-4364bba2-0e71-4407-a402-74b8d504d11c\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:31.588831       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-6550/pod-configmaps-c66de13e-a3a5-47e0-a273-fed1a6f778cc\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:33.227675       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-611/dns-test-847253ee-0977-41b8-b3ce-0d600e587f56\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:33.983267       1 
scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-2564/pod-subpath-test-inlinevolume-ctjx\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:34.156979       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2986/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-vww4l\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:35.974875       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1736/pod-subpath-test-preprovisionedpv-qrvd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:38.011206       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-6463/local-injector\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:38.038715       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-638/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-vgpxq\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:38.471215       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9950/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-8h6rr\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:39.391656       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-1404/test-pod\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:41.541770       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2986/pod-9fefac5f-5e4a-4fdc-b0f7-5b9339df64dd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:42.125004       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-1263/metadata-volume-50427f8a-f2fc-481e-b034-29dde84c3f98\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:43.935559       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6118/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-rrln6\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:44.111305       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9950/pod-283a7571-63fc-48d0-bb94-20d7ed2dd48c\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:48.119335       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1238/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-7mqtw\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:48.235233       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4559/pod-submit-remove-3ede9e89-46bb-4bff-8905-49397018b648\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:48.323790       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-4738/pod-5718040c-8445-43dc-85e6-7e3416bd1301\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:49.327548       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-3638/pod-0\" 
node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:50.613127       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9950/pod-79bf8e0a-25a7-459f-8d6a-92ff6f7ff209\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:51.169359       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-6463/local-client\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:51.910847       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-3871/e2e-test-httpd-pod\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:52.960523       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-638/pod-subpath-test-preprovisionedpv-d6nb\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:53.202698       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-test-2990/busybox-privileged-true-9d2fd5ca-5327-439f-95f3-987fc95e3648\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:55.801442       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1255/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-czhhj\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:21:58.140140       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-611/dns-test-dc515676-a8ce-4bd3-af8e-054c3ea91b11\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:21:58.309018       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2820/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-78gnk\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:01.628643       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4787-9313/csi-mockplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:01.794628       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9741/inline-volume-9p8qq\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-9p8qq-my-volume\\\" not found.\"\nI1002 23:22:02.128798       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4787-9313/csi-mockplugin-attacher-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:03.421479       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8213-1480/csi-mockplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:03.913781       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8213-1480/csi-mockplugin-resizer-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:04.070343       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2820/pod-11d3393c-fb01-40c4-80c2-112b9245b283\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:04.281811       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"volume-expand-8606/pod-1becf8c1-0b98-4440-a5e6-f1e482fecf9e\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:06.007197       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1255/pod-subpath-test-preprovisionedpv-hlr9\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:06.352956       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9348-6937/csi-hostpathplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:07.087562       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-6118/pod-subpath-test-preprovisionedpv-l6w6\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:08.060247       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"secrets-4966/pod-secrets-6582fe3a-1a80-4130-8343-5173db95a2e3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:08.316827       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4787/pvc-volume-tester-zq5dx\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:10.246120       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-9348/pod-subpath-test-dynamicpv-cj97\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:10.384975       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3529/simpletest.deployment-76b58b9b6c-q84f5\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:10.423794       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"gc-3529/simpletest.deployment-76b58b9b6c-8dr4t\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:11.039983       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4787/inline-volume-28bxp\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:11.391820       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8213/pvc-volume-tester-hdx2c\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:14.042657       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-9741-2617/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:14.127994       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-4392/pod-c90c9aed-bd81-4e65-a19b-4f68902e535f\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:14.737516       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9741/inline-volume-tester-4v6dw\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-4v6dw-my-volume-0\\\" not found.\"\nI1002 23:22:16.940591       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-933/httpd\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:16.970477       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5750/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-9w64g\" 
node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:17.117807       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9118/inline-volume-zdtfb\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-zdtfb-my-volume\\\" not found.\"\nI1002 23:22:17.928674       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-9741/inline-volume-tester-4v6dw\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:20.423185       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"topology-1893/pod-2d28b500-c674-43c0-9ffc-38f2c8608cb5\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI1002 23:22:20.731912       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-5298/downward-api-6ca2efc2-5743-4550-8e4b-0f3b04bba363\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:21.927297       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"topology-1893/pod-2d28b500-c674-43c0-9ffc-38f2c8608cb5\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI1002 23:22:22.698412       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5750/pod-d357df9a-c57a-475c-8d11-ecc4f4e9877f\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:23.166250       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-8606/pod-1f44b7e5-67e9-48f0-8eea-88b49675e501\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:23.916663       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-274/nodeport-update-service-np5bz\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:23.929575       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-274/nodeport-update-service-gjrsl\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:23.937892       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"topology-1893/pod-2d28b500-c674-43c0-9ffc-38f2c8608cb5\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:24.268121       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-3277-587/csi-hostpathplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:27.398225       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-274/execpodlgvm6\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:28.136012       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-3277/pod-c93ba9f5-1a60-4d32-8e79-9603b7c08141\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:28.993597       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5750/pod-f082bf6d-8b90-42ef-9b34-6193d3b43eae\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:29.007626       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-2454/downwardapi-volume-85402545-3825-44f0-9a09-9fc81f6d5f21\" 
node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:29.209699       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-933/success\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:29.484498       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-9118-4183/csi-hostpathplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:29.841014       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7724/pod-subpath-test-inlinevolume-zwrm\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:30.188874       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9118/inline-volume-tester-b7dhf\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-b7dhf-my-volume-0\\\" not found.\"\nI1002 23:22:31.940573       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-9118/inline-volume-tester-b7dhf\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:36.445896       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubelet-test-8865/busybox-scheduling-c477e70b-af2d-4145-9f3d-03f69d16ceea\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:37.425578       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8535/pod-subpath-test-dynamicpv-xk6q\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:38.740325       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-expand-3277/pod-1a5ee9fe-5e99-4470-b8b4-5e3d6d48449a\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:38.866587       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-8301/labelsupdate30880fec-c4bd-4818-a958-3e5f4a4468b6\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:40.403663       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-1439/pod-subpath-test-dynamicpv-2p7r\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:41.456258       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-8946/pod-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:41.708701       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-8946/pod-1\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:41.957928       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-8946/pod-2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:43.798609       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-745/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-6clw7\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:44.205082       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-6461/ss-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:49.618865       1 scheduler.go:672] \"Successfully bound pod to 
node\" pod=\"emptydir-560/pod-78ca01bf-8351-4cbb-ba71-79e94765cafa\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:51.469278       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-764/configmap-client\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:54.004618       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-49/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:54.259515       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-49/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:54.483597       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-49/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:54.716499       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-49/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:22:55.238785       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-7113/downward-api-14413735-8c32-4c84-8f21-4f2fa730ddde\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:22:58.361305       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-6461/ss-1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:00.506906       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-7497/pod-e2646fbe-ef9e-41a6-b42b-4b440d9e54a4\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:02.792237       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2346/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:03.040612       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2346/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:03.294079       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2346/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:03.544492       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2346/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:04.161172       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"statefulset-6461/ss-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:04.475941       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-824/downwardapi-volume-77adb182-e40d-4682-87a6-886cf9a65c72\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:05.638358       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-test-9282/busybox-privileged-false-ef14b6b9-eeaa-412a-9942-911edb9c7dcd\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:09.853427       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-2961/pod-projected-configmaps-18c050c7-0ebd-4826-ad59-7ca2b4b910ea\" 
node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:10.243965       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"downward-api-7619/downwardapi-volume-a9976b52-f611-4dc0-abbe-a9be74aebf65\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:12.488157       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-3815/deployment-55649fd747-6wndv\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:12.494789       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-3815/deployment-55649fd747-vrkm2\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:12.494975       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-3815/deployment-55649fd747-7n4jk\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:12.721152       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-3815/deployment-55649fd747-jnds5\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:12.728351       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"apply-3815/deployment-55649fd747-xz9sx\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:13.007451       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5442/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-ztt5c\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:13.591279       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4164/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-vfr2m\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:14.320339       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-7195/hostpath-injector\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:14.576566       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-154/e2e-configmap-dns-server-2e862741-8566-4895-a872-623f5b6839f0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:15.363183       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-9901/startup-989ae89d-ec69-4b0b-aab0-2bad94072d7e\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:15.909453       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-8621/pod-hostip-8d247c9d-8230-4288-97e2-0519a846ce9e\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:17.307275       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-49/test-container-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:17.548206       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-49/host-test-container-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:17.579647       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-154/e2e-dns-utils\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:17.834584       1 
scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-6900/pod-de58644f-580d-46af-9cc0-c868317473b6\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:18.290028       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"endpointslice-7294/pod1\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:18.537858       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"endpointslice-7294/pod2\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:20.893836       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-805-1114/csi-hostpathplugin-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:21.084964       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5442/pod-subpath-test-preprovisionedpv-252z\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:21.358775       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"ephemeral-805/inline-volume-tester-h6qcj\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:23.246019       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-3305/nodeport-test-hjd9n\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:23.246306       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-3305/nodeport-test-qj7xb\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:24.644650       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4986-511/csi-hostpathplugin-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:27.240712       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"e2e-privileged-pod-5997/privileged-pod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:27.623535       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-7195/hostpath-client\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:27.623815       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-4105/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-hm8rk\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:27.839102       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"projected-602/pod-projected-configmaps-253e3577-f03f-404c-9f36-c5e8182af480\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:28.062241       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2346/test-container-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:28.313178       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-2346/host-test-container-pod\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:28.514413       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-4986/pod-subpath-test-dynamicpv-rtpl\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:29.788598       1 
scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-3305/execpodmt97v\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:32.243322       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-74/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:32.496389       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-74/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:32.752723       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-74/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:32.919715       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9901-9823/csi-mockplugin-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:33.006098       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-74/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:33.161526       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9901-9823/csi-mockplugin-attacher-0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:34.632321       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-270/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-k29r5\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:35.788880       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8105/pod-subpath-test-inlinevolume-m6hv\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:40.383999       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9901/pvc-volume-tester-gcssw\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:42.018249       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-270/pod-494bca6f-07dc-40fc-a71e-c0a134a49298\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:43.415333       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4731/aws-injector\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:44.995667       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-294/pod1\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:46.613874       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7878-9591/csi-mockplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:47.084779       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7878-9591/csi-mockplugin-attacher-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:48.452869       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-270/pod-854cde22-260e-407f-a78f-9a6d2184caa9\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:48.943432       1 
scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-294/pod2\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:49.826583       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7951/pod-subpath-test-inlinevolume-fcqd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:52.108684       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-4105/pod-1afd8b4f-d72c-411c-a476-bade65bacf97\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:55.140661       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-294/execpod8wsbz\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:55.288730       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-4105/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-gmgkp\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:55.510140       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-74/test-container-pod\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:56.029061       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-454/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-rcfcb\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:56.426519       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-2366/sample-webhook-deployment-78988fc6cd-fbt7v\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:23:58.226892       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-4708/netserver-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:58.507692       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-4708/netserver-1\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:58.749401       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-4708/netserver-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:59.001259       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-4708/netserver-3\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:23:59.020609       1 factory.go:381] \"Unable to schedule pod; no fit; waiting\" pod=\"csi-mock-volumes-7878/pvc-volume-tester-2vv8c\" err=\"0/5 nodes are available: 1 node(s) did not have enough free storage, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity/selector.\"\nI1002 23:23:59.656140       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-5400/pod-qos-class-c4ce5776-baed-4572-8d27-42e774331747\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:01.903223       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-454/pod-a3c24312-7526-40ac-8b4b-2eccfa95dc42\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:01.910689       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"emptydir-8307/pod-2669cbd5-6987-49a8-bbbb-479a75d719c5\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:02.334978       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-7404/sample-webhook-deployment-78988fc6cd-vmnk2\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:03.158870       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"secrets-7523/pod-secrets-8c65987d-ad30-40a4-a1c1-1dc264799294\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:04.299801       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"port-forwarding-3063/pfpod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:04.685019       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4731/aws-client\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:07.110787       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-8061/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-c25qq\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:07.308220       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5236/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-txkgs\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:07.968589       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-5010/externalsvc-m2mn8\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:07.968831       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-5010/externalsvc-hjkrg\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:08.233782       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-454/pod-26c256ff-1ee7-4955-ab21-0a11512af2b0\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:09.391227       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-6797/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-kcvst\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:12.492455       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-2926/pod-bbc796e4-9b4b-4a2b-8f6b-83c113ce94eb\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:15.177025       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-5010/execpod4b9zn\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:17.783915       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7252/pod-subpath-test-inlinevolume-drxt\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:19.069554       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"security-context-test-5703/implicit-root-uid\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:21.525947       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"nettest-4708/test-container-pod\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:21.692039       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-8061/exec-volume-test-preprovisionedpv-kbfz\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:21.802695       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5236/pod-subpath-test-preprovisionedpv-fxxp\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:21.976856       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-6797/pod-38fe7c05-1568-4d8e-ac16-46f20ece4103\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:22.744108       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-2318/aws-injector\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:23.268855       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"port-forwarding-4959/pfpod\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:23.838759       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"subpath-3258/pod-subpath-test-configmap-rt55\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:24.373422       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3212/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-2rv2m\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:24.804587       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-1509/test-deployment-855f7994f9-gd4tm\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:24.811476       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-1509/test-deployment-855f7994f9-z9xks\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:24.935461       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"svcaccounts-6275/test-pod-6ae88686-e409-4b02-8165-09a49c910b60\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:24.941232       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8999/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-2z89x\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:25.032914       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-7640/aws-injector\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:26.850250       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-btlc5\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:26.870776       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-g7hsc\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:26.871129       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-w4fp9\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:26.887801       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-nqkt4\" 
node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:26.887886       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-x9rl9\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:26.887949       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-xskx6\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:26.887999       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-wpgrx\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:26.912573       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-h42tb\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:26.912849       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-swr94\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:26.912915       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-ngwqw\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:27.240836       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-6797/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-2jqg6\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:27.393517       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9232-5809/csi-mockplugin-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:27.578186       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-1509/test-deployment-56c98d85f9-7hsqb\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:27.875963       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9232-5809/csi-mockplugin-attacher-0\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:30.898525       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3220/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-h7shf\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:32.630758       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-1509/test-deployment-56c98d85f9-krsbt\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:32.642078       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-1509/test-deployment-d4dfddfbf-fl8vt\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:34.801672       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-5236/pod-subpath-test-preprovisionedpv-fxxp\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:35.028547       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-3630/liveness-71674b3f-617d-4718-9691-703fe75bf729\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:35.272429       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-9232/pvc-volume-tester-5jzkq\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:36.637682       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-7163/rs-sf9gm\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:37.402059       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-8999/pod-subpath-test-preprovisionedpv-wwgr\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:39.063553       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replicaset-7544/condition-test-v7b5c\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:39.081174       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replicaset-7544/condition-test-92mmf\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:39.666817       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-5542/pod1\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:39.775097       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5791/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-s2c8t\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:40.293315       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-1-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:40.293763       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-0-0\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:40.293836       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-0\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:41.777206       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-1509/test-deployment-d4dfddfbf-n5s4s\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:42.088253       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"prestop-5975/server\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:44.166730       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3220/pod-817f6895-7184-4b64-b60e-412d0c82a92e\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:47.200960       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:47.673542       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-5542/execpodtmlz5\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:48.207258       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-1-1\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:50.995806       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-49/foo--1-tn9d9\" 
node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:51.004499       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"job-49/foo--1-8nfbd\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:51.084076       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"prestop-5975/tester\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:51.466797       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-1-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:52.784208       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3354/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-zxbjd\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:24:53.863870       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:54.506828       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-8635/httpd\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:54.752878       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-1336/exec-volume-test-inlinevolume-8vsj\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:56.136179       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-5542/pod2\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:57.694816       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"webhook-1398/sample-webhook-deployment-78988fc6cd-qkr5s\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:24:59.292818       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-3\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:00.405430       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-4034/dns-test-e6d01c45-bc36-420c-a5b2-31a6a865ffb7\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:04.118609       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-4\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:05.446822       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5073/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-2m8mr\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:25:05.695685       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-7610/pod-subpath-test-inlinevolume-n2cd\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:25:06.181438       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-5791/local-injector\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:25:06.844553       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-3354/pod-subpath-test-preprovisionedpv-bf68\" 
node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:25:06.952649       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-4463/exec-volume-test-preprovisionedpv-mw77\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:08.383060       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-7640/aws-client\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:08.526070       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"dns-1246/dns-test-b9ff5d3a-4a08-4dae-b59c-9e0900452b96\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:09.030890       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"kubectl-8635/failure-1\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:10.289765       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-5\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:11.207533       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-gnk2n\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:11.212932       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-zqvrq\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:11.225746       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-blqbn\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:11.227331       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-qp25g\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:11.227538       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-qljlt\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:11.236060       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-rhwww\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:11.236132       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-4rxsh\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:11.250365       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-4d5m7\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:11.258641       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-kz6ql\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:11.259351       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-75zxg\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:14.000865       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"provisioning-985/hostexec-ip-172-20-33-208.ap-south-1.compute.internal-2hcgq\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:25:14.312178       1 scheduler.go:672] \"Successfully bound pod to node\" 
pod=\"volume-3406/hostexec-ip-172-20-54-138.ap-south-1.compute.internal-wpsbg\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:25:15.496336       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-1703/test-new-deployment-847dcfb7fb-qpgk8\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:15.820171       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replicaset-9057/test-rs-g97x4\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:16.866693       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-6\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:16.917020       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volumemode-1931/hostexec-ip-172-20-40-74.ap-south-1.compute.internal-rs2xn\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:25:17.589045       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-wrapper-9915/pod-secrets-881c2fc8-d853-4006-a0ed-1de9f1677e07\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:19.221800       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-1703/test-new-deployment-847dcfb7fb-cpg2k\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:19.706169       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-1703/test-new-deployment-847dcfb7fb-jsd8x\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:19.714275       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-1703/test-new-deployment-847dcfb7fb-mpq4s\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:19.748693       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-9024/pod-0b4cb084-30a3-4c59-8843-983993b1d16b\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:20.268479       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5073/pod-66e424eb-f9cc-4ef0-93a7-68b3dba23e50\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:25:20.845271       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-7\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:21.060770       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replicaset-9057/test-rs-6jxpl\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:21.323808       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"replicaset-9057/test-rs-nl79g\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:22.247732       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-3406/exec-volume-test-preprovisionedpv-t5kz\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:25:22.880394       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"disruption-239/rs-ssdr4\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI1002 23:25:22.955165       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"volume-1985/hostexec-ip-172-20-34-88.ap-south-1.compute.internal-frlft\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI1002 23:25:23.002002       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"container-probe-7658/busybox-82987546-1c8d-435a-8b4f-23df2de45cd9\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:23.180336       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-8\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:24.894337       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"emptydir-4925/pod-8a7f968e-0514-4d36-971b-2b2d26c7a10a\" node=\"ip-172-20-34-88.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:25.401364       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-9\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:28.532047       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-878/test-rolling-update-with-lb-864fb64577-8nx8f\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:28.541584       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-878/test-rolling-update-with-lb-864fb64577-vjhsh\" node=\"ip-172-20-33-208.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=3\nI1002 23:25:28.547291       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"deployment-878/test-rolling-update-with-lb-864fb64577-2x6hm\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=2\nI1002 23:25:28.859606       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-10\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:30.346075       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"services-4935/hairpin\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:30.514221       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"configmap-793/pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:32.328655       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"pods-4563/pod-submit-status-2-11\" node=\"ip-172-20-54-138.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI1002 23:25:32.483927       1 scheduler.go:672] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5073/pod-9a26ab19-f5d8-4ab9-a3bf-15937a580bdc\" node=\"ip-172-20-40-74.ap-south-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\n==== END logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-45-140.ap-south-1.compute.internal ====\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"18019\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"37562\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        
\"resourceVersion\": \"37579\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"37583\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"37585\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"37589\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"37597\"\n    },\n    \"items\": []\n}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:25:35.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5820" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl cluster-info dump
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1087
    should check if cluster-info dump succeeds
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1088
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds","total":-1,"completed":39,"skipped":317,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:25:35.962: INFO: Only supported for providers [vsphere] (not aws)
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:25:35.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9430" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":23,"skipped":176,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:25:28.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
STEP: Creating configMap with name configmap-test-volume-9bdbf4a9-8b35-460f-847e-c4edd1799b2c
STEP: Creating a pod to test consume configMaps
Oct  2 23:25:30.635: INFO: Waiting up to 5m0s for pod "pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30" in namespace "configmap-793" to be "Succeeded or Failed"
Oct  2 23:25:30.872: INFO: Pod "pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30": Phase="Pending", Reason="", readiness=false. Elapsed: 237.222307ms
Oct  2 23:25:33.110: INFO: Pod "pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474308089s
Oct  2 23:25:35.348: INFO: Pod "pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.71227741s
Oct  2 23:25:37.585: INFO: Pod "pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.949272291s
Oct  2 23:25:39.823: INFO: Pod "pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.187665591s
STEP: Saw pod success
Oct  2 23:25:39.823: INFO: Pod "pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30" satisfied condition "Succeeded or Failed"
Oct  2 23:25:40.060: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 23:25:40.544: INFO: Waiting for pod pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30 to disappear
Oct  2 23:25:40.781: INFO: Pod pod-configmaps-426dfaee-a58f-49b6-ae71-f904aac66b30 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 13 lines ...
Oct  2 23:25:35.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  2 23:25:37.499: INFO: Waiting up to 5m0s for pod "security-context-0ed41ee7-c6ba-4086-9f2f-7472022c6526" in namespace "security-context-7423" to be "Succeeded or Failed"
Oct  2 23:25:37.750: INFO: Pod "security-context-0ed41ee7-c6ba-4086-9f2f-7472022c6526": Phase="Pending", Reason="", readiness=false. Elapsed: 251.703286ms
Oct  2 23:25:40.002: INFO: Pod "security-context-0ed41ee7-c6ba-4086-9f2f-7472022c6526": Phase="Pending", Reason="", readiness=false. Elapsed: 2.503287956s
Oct  2 23:25:42.253: INFO: Pod "security-context-0ed41ee7-c6ba-4086-9f2f-7472022c6526": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.754276635s
STEP: Saw pod success
Oct  2 23:25:42.253: INFO: Pod "security-context-0ed41ee7-c6ba-4086-9f2f-7472022c6526" satisfied condition "Succeeded or Failed"
Oct  2 23:25:42.503: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod security-context-0ed41ee7-c6ba-4086-9f2f-7472022c6526 container test-container: <nil>
STEP: delete the pod
Oct  2 23:25:43.012: INFO: Waiting for pod security-context-0ed41ee7-c6ba-4086-9f2f-7472022c6526 to disappear
Oct  2 23:25:43.262: INFO: Pod security-context-0ed41ee7-c6ba-4086-9f2f-7472022c6526 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.774 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":40,"skipped":323,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:25:43.815: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 82 lines ...
Oct  2 23:25:03.748: INFO: PersistentVolumeClaim pvc-76mmn found but phase is Pending instead of Bound.
Oct  2 23:25:05.994: INFO: PersistentVolumeClaim pvc-76mmn found and phase=Bound (2.491532376s)
Oct  2 23:25:05.994: INFO: Waiting up to 3m0s for PersistentVolume local-blxch to have phase Bound
Oct  2 23:25:06.239: INFO: PersistentVolume local-blxch found and phase=Bound (244.737477ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bf68
STEP: Creating a pod to test atomic-volume-subpath
Oct  2 23:25:06.974: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bf68" in namespace "provisioning-3354" to be "Succeeded or Failed"
Oct  2 23:25:07.219: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Pending", Reason="", readiness=false. Elapsed: 244.853355ms
Oct  2 23:25:09.464: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490241876s
Oct  2 23:25:11.709: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.735564148s
Oct  2 23:25:13.980: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Pending", Reason="", readiness=false. Elapsed: 7.006548515s
Oct  2 23:25:16.227: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Running", Reason="", readiness=true. Elapsed: 9.252950386s
Oct  2 23:25:18.473: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Running", Reason="", readiness=true. Elapsed: 11.499333547s
Oct  2 23:25:20.718: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Running", Reason="", readiness=true. Elapsed: 13.744297456s
Oct  2 23:25:22.965: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Running", Reason="", readiness=true. Elapsed: 15.990956907s
Oct  2 23:25:25.210: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Running", Reason="", readiness=true. Elapsed: 18.236517686s
Oct  2 23:25:27.456: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Running", Reason="", readiness=true. Elapsed: 20.482034608s
Oct  2 23:25:29.702: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Running", Reason="", readiness=true. Elapsed: 22.728092877s
Oct  2 23:25:31.950: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.976262869s
STEP: Saw pod success
Oct  2 23:25:31.950: INFO: Pod "pod-subpath-test-preprovisionedpv-bf68" satisfied condition "Succeeded or Failed"
Oct  2 23:25:32.195: INFO: Trying to get logs from node ip-172-20-40-74.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-bf68 container test-container-subpath-preprovisionedpv-bf68: <nil>
STEP: delete the pod
Oct  2 23:25:32.693: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bf68 to disappear
Oct  2 23:25:32.938: INFO: Pod pod-subpath-test-preprovisionedpv-bf68 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bf68
Oct  2 23:25:32.938: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bf68" in namespace "provisioning-3354"
... skipping 6 lines ...
Oct  2 23:25:33.922: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-driver-437f3d43-46e4-4a7f-8354-901859aa9acd && rm -r /tmp/local-driver-437f3d43-46e4-4a7f-8354-901859aa9acd] Namespace:provisioning-3354 PodName:hostexec-ip-172-20-40-74.ap-south-1.compute.internal-zxbjd ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  2 23:25:33.922: INFO: >>> kubeConfig: /root/.kube/config
Oct  2 23:25:35.466: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: command:   umount /tmp/local-driver-437f3d43-46e4-4a7f-8354-901859aa9acd && rm -r /tmp/local-driver-437f3d43-46e4-4a7f-8354-901859aa9acd
Oct  2 23:25:35.466: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: stdout:    ""
Oct  2 23:25:35.466: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: stderr:    "rm: cannot remove '/tmp/local-driver-437f3d43-46e4-4a7f-8354-901859aa9acd': Device or resource busy\n"
Oct  2 23:25:35.466: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: exit code: 0
Oct  2 23:25:35.466: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "command terminated with exit code 1",
        },
        Code: 1,
    }
... skipping 37 lines ...
Oct  2 23:25:35.963: INFO: At 2021-10-02 23:25:08 +0000 UTC - event for pod-subpath-test-preprovisionedpv-bf68: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Oct  2 23:25:35.963: INFO: At 2021-10-02 23:25:08 +0000 UTC - event for pod-subpath-test-preprovisionedpv-bf68: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Created: Created container init-volume-preprovisionedpv-bf68
Oct  2 23:25:35.963: INFO: At 2021-10-02 23:25:09 +0000 UTC - event for pod-subpath-test-preprovisionedpv-bf68: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Started: Started container init-volume-preprovisionedpv-bf68
Oct  2 23:25:35.963: INFO: At 2021-10-02 23:25:09 +0000 UTC - event for pod-subpath-test-preprovisionedpv-bf68: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct  2 23:25:35.963: INFO: At 2021-10-02 23:25:09 +0000 UTC - event for pod-subpath-test-preprovisionedpv-bf68: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Created: Created container test-container-subpath-preprovisionedpv-bf68
Oct  2 23:25:35.963: INFO: At 2021-10-02 23:25:09 +0000 UTC - event for pod-subpath-test-preprovisionedpv-bf68: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Started: Started container test-container-subpath-preprovisionedpv-bf68
Oct  2 23:25:35.963: INFO: At 2021-10-02 23:25:32 +0000 UTC - event for pod-subpath-test-preprovisionedpv-bf68: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} FailedSync: error determining status: rpc error: code = Unknown desc = failed to get sandbox ip: check network namespace closed: remove netns: unlinkat /var/run/netns/cni-48889d70-77e2-f2e1-1ad4-126b29286558: device or resource busy
Oct  2 23:25:35.963: INFO: At 2021-10-02 23:25:35 +0000 UTC - event for hostexec-ip-172-20-40-74.ap-south-1.compute.internal-zxbjd: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Killing: Stopping container agnhost-container
Oct  2 23:25:36.208: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct  2 23:25:36.208: INFO: 
Oct  2 23:25:36.455: INFO: 
Logging node info for node ip-172-20-33-208.ap-south-1.compute.internal
Oct  2 23:25:36.700: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-33-208.ap-south-1.compute.internal    8079af1f-fbd3-4ed5-93ab-41a6d4c75326 35776 0 2021-10-02 23:04:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1 failure-domain.beta.kubernetes.io/zone:ap-south-1a kops.k8s.io/instancegroup:nodes-ap-south-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-33-208.ap-south-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-south-1a topology.hostpath.csi/node:ip-172-20-33-208.ap-south-1.compute.internal topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-049e8578446ca957f"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-10-02 23:04:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kops-controller Update v1 2021-10-02 23:04:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-10-02 23:04:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2021-10-02 23:09:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2021-10-02 23:17:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-049e8578446ca957f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3910443008 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: 
{{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3805585408 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-02 23:04:04 +0000 UTC,LastTransitionTime:2021-10-02 23:04:04 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-02 23:24:32 +0000 UTC,LastTransitionTime:2021-10-02 23:04:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-02 23:24:32 +0000 UTC,LastTransitionTime:2021-10-02 23:04:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-02 23:24:32 +0000 UTC,LastTransitionTime:2021-10-02 23:04:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-02 23:24:32 +0000 UTC,LastTransitionTime:2021-10-02 23:04:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.33.208,},NodeAddress{Type:ExternalIP,Address:13.232.20.209,},NodeAddress{Type:Hostname,Address:ip-172-20-33-208.ap-south-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-33-208.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-232-20-209.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec27b3aec05d305a050a518e01e80b9a,SystemUUID:EC27B3AE-C05D-305A-050A-518E01E80B9A,BootID:65cd5698-972e-4500-8870-7e5f6565be5d,KernelVersion:3.10.0-1160.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.9 (Maipo),ContainerRuntimeVersion:containerd://1.4.10,KubeletVersion:v1.22.2,KubeProxyVersion:v1.22.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.2],SizeBytes:105455305,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:732da7df530f4ad1923a7c71b927b0d964e596c622de68c1c6179fb7148704fd k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.2.1],SizeBytes:66930652,},ContainerImage{Names:[docker.io/library/nginx@sha256:765e51caa9e739220d59c7f7a75508e77361b441dccf128483b7f5cce8306652 docker.io/library/nginx:latest],SizeBytes:53799606,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-4897^bea26f33-23d5-11ec-93ba-d6ca9436d200 kubernetes.io/csi/ebs.csi.aws.com^vol-0594a9b669b028f33 kubernetes.io/csi/ebs.csi.aws.com^vol-08c47da5f61fb44ce kubernetes.io/csi/ebs.csi.aws.com^vol-0ab1a3d4c2a8111aa],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0594a9b669b028f33,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-08c47da5f61fb44ce,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0ab1a3d4c2a8111aa,DevicePath:,},},Config:nil,},}
... skipping 225 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230

      Oct  2 23:25:35.466: Unexpected error:
          <exec.CodeExitError>: {
              Err: {
                  s: "command terminated with exit code 1",
              },
              Code: 1,
          }
          command terminated with exit code 1
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:250
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":27,"skipped":179,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}

S
------------------------------
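[Editor's illustrative sketch, not part of the captured log.] The subPath failures recorded above exercise mounting a single file from a volume into a container via volumeMounts[].subPath. The Go sketch below shows the general shape of such a pod built from client-go types; the pod name, image, host path, and file name are invented for illustration and are not the e2e framework's actual fixtures.

```go
// Illustrative sketch only: a pod that mounts a single file from a hostPath
// volume using volumeMounts[].subPath, the mechanism the "should support file
// as subpath" cases above exercise. All names and paths are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/subpath-example", // placeholder host directory
						Type: &hostPathType,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"cat", "/mnt/file.txt"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "data",
					MountPath: "/mnt/file.txt",
					// subPath exposes only this entry from the volume inside
					// the container, which is what the failing cases verify.
					SubPath: "file.txt",
				}},
			}},
		},
	}

	// Print the manifest; creating it in a cluster would need a clientset
	// and kubeconfig, which this sketch deliberately avoids.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```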
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:25:45.022: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 43 lines ...
Oct  2 23:25:34.000: INFO: PersistentVolumeClaim pvc-ntqv6 found but phase is Pending instead of Bound.
Oct  2 23:25:36.248: INFO: PersistentVolumeClaim pvc-ntqv6 found and phase=Bound (6.984998369s)
Oct  2 23:25:36.248: INFO: Waiting up to 3m0s for PersistentVolume local-lv5pd to have phase Bound
Oct  2 23:25:36.493: INFO: PersistentVolume local-lv5pd found and phase=Bound (244.708517ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-v4m2
STEP: Creating a pod to test exec-volume-test
Oct  2 23:25:37.227: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-v4m2" in namespace "volume-1985" to be "Succeeded or Failed"
Oct  2 23:25:37.473: INFO: Pod "exec-volume-test-preprovisionedpv-v4m2": Phase="Pending", Reason="", readiness=false. Elapsed: 245.926707ms
Oct  2 23:25:39.719: INFO: Pod "exec-volume-test-preprovisionedpv-v4m2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.491102307s
STEP: Saw pod success
Oct  2 23:25:39.719: INFO: Pod "exec-volume-test-preprovisionedpv-v4m2" satisfied condition "Succeeded or Failed"
Oct  2 23:25:39.963: INFO: Trying to get logs from node ip-172-20-34-88.ap-south-1.compute.internal pod exec-volume-test-preprovisionedpv-v4m2 container exec-container-preprovisionedpv-v4m2: <nil>
STEP: delete the pod
Oct  2 23:25:40.464: INFO: Waiting for pod exec-volume-test-preprovisionedpv-v4m2 to disappear
Oct  2 23:25:40.710: INFO: Pod exec-volume-test-preprovisionedpv-v4m2 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-v4m2
Oct  2 23:25:40.710: INFO: Deleting pod "exec-volume-test-preprovisionedpv-v4m2" in namespace "volume-1985"
... skipping 52 lines ...
Oct  2 23:25:34.469: INFO: PersistentVolumeClaim pvc-d4jv5 found but phase is Pending instead of Bound.
Oct  2 23:25:36.721: INFO: PersistentVolumeClaim pvc-d4jv5 found and phase=Bound (16.010905108s)
Oct  2 23:25:36.721: INFO: Waiting up to 3m0s for PersistentVolume local-54hlm to have phase Bound
Oct  2 23:25:36.971: INFO: PersistentVolume local-54hlm found and phase=Bound (250.517346ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9q9q
STEP: Creating a pod to test subpath
Oct  2 23:25:37.729: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9q9q" in namespace "provisioning-985" to be "Succeeded or Failed"
Oct  2 23:25:37.980: INFO: Pod "pod-subpath-test-preprovisionedpv-9q9q": Phase="Pending", Reason="", readiness=false. Elapsed: 250.875787ms
Oct  2 23:25:40.232: INFO: Pod "pod-subpath-test-preprovisionedpv-9q9q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502285906s
Oct  2 23:25:42.483: INFO: Pod "pod-subpath-test-preprovisionedpv-9q9q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.753093056s
STEP: Saw pod success
Oct  2 23:25:42.483: INFO: Pod "pod-subpath-test-preprovisionedpv-9q9q" satisfied condition "Succeeded or Failed"
Oct  2 23:25:42.738: INFO: Trying to get logs from node ip-172-20-33-208.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-9q9q container test-container-subpath-preprovisionedpv-9q9q: <nil>
STEP: delete the pod
Oct  2 23:25:43.253: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9q9q to disappear
Oct  2 23:25:43.510: INFO: Pod pod-subpath-test-preprovisionedpv-9q9q no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9q9q
Oct  2 23:25:43.510: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9q9q" in namespace "provisioning-985"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":17,"skipped":97,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":36,"skipped":267,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:25:46.792: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 78 lines ...
• [SLOW TEST:19.334 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1007
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":19,"skipped":125,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: Destroying namespace "pod-disks-5097" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.758 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 172 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":18,"skipped":100,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:25:55.758: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 113 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-12fc1c8f-9f98-4e0f-b655-d440483483ef
STEP: Creating secret with name secret-projected-all-test-volume-17dfdf64-4cde-4b36-a5f4-db90feab78cb
STEP: Creating a pod to test Check all projections for projected volume plugin
Oct  2 23:25:51.985: INFO: Waiting up to 5m0s for pod "projected-volume-03f44907-beb6-4209-8ac2-03a26bfdb381" in namespace "projected-2065" to be "Succeeded or Failed"
Oct  2 23:25:52.223: INFO: Pod "projected-volume-03f44907-beb6-4209-8ac2-03a26bfdb381": Phase="Pending", Reason="", readiness=false. Elapsed: 237.274647ms
Oct  2 23:25:54.460: INFO: Pod "projected-volume-03f44907-beb6-4209-8ac2-03a26bfdb381": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.475036268s
STEP: Saw pod success
Oct  2 23:25:54.461: INFO: Pod "projected-volume-03f44907-beb6-4209-8ac2-03a26bfdb381" satisfied condition "Succeeded or Failed"
Oct  2 23:25:54.699: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod projected-volume-03f44907-beb6-4209-8ac2-03a26bfdb381 container projected-all-volume-test: <nil>
STEP: delete the pod
Oct  2 23:25:55.182: INFO: Waiting for pod projected-volume-03f44907-beb6-4209-8ac2-03a26bfdb381 to disappear
Oct  2 23:25:55.419: INFO: Pod projected-volume-03f44907-beb6-4209-8ac2-03a26bfdb381 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 15 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:25:51.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83e6cd76-7f65-407e-8680-add1d32a69e5" in namespace "downward-api-646" to be "Succeeded or Failed"
Oct  2 23:25:52.132: INFO: Pod "downwardapi-volume-83e6cd76-7f65-407e-8680-add1d32a69e5": Phase="Pending", Reason="", readiness=false. Elapsed: 250.242927ms
Oct  2 23:25:54.384: INFO: Pod "downwardapi-volume-83e6cd76-7f65-407e-8680-add1d32a69e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.502642227s
STEP: Saw pod success
Oct  2 23:25:54.384: INFO: Pod "downwardapi-volume-83e6cd76-7f65-407e-8680-add1d32a69e5" satisfied condition "Succeeded or Failed"
Oct  2 23:25:54.635: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod downwardapi-volume-83e6cd76-7f65-407e-8680-add1d32a69e5 container client-container: <nil>
STEP: delete the pod
Oct  2 23:25:55.144: INFO: Waiting for pod downwardapi-volume-83e6cd76-7f65-407e-8680-add1d32a69e5 to disappear
Oct  2 23:25:55.395: INFO: Pod downwardapi-volume-83e6cd76-7f65-407e-8680-add1d32a69e5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.531 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":161,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume"]}

S
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":275,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:25:55.920: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 88 lines ...
Oct  2 23:25:45.641: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2c85ae9e-03f9-4c96-be30-f69593e94aa7] Namespace:persistent-local-volumes-test-5073 PodName:hostexec-ip-172-20-40-74.ap-south-1.compute.internal-2m8mr ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  2 23:25:45.641: INFO: >>> kubeConfig: /root/.kube/config
Oct  2 23:25:47.230: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: command:   rm -r /tmp/local-volume-test-2c85ae9e-03f9-4c96-be30-f69593e94aa7
Oct  2 23:25:47.230: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: stdout:    ""
Oct  2 23:25:47.230: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: stderr:    "rm: cannot remove '/tmp/local-volume-test-2c85ae9e-03f9-4c96-be30-f69593e94aa7': Device or resource busy\n"
Oct  2 23:25:47.230: INFO: exec ip-172-20-40-74.ap-south-1.compute.internal: exit code: 0
Oct  2 23:25:47.231: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "command terminated with exit code 1",
        },
        Code: 1,
    }
... skipping 34 lines ...
Oct  2 23:25:47.467: INFO: At 2021-10-02 23:25:32 +0000 UTC - event for pod-9a26ab19-f5d8-4ab9-a3bf-15937a580bdc: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-5073/pod-9a26ab19-f5d8-4ab9-a3bf-15937a580bdc to ip-172-20-40-74.ap-south-1.compute.internal
Oct  2 23:25:47.467: INFO: At 2021-10-02 23:25:33 +0000 UTC - event for pod-9a26ab19-f5d8-4ab9-a3bf-15937a580bdc: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Created: Created container write-pod
Oct  2 23:25:47.467: INFO: At 2021-10-02 23:25:33 +0000 UTC - event for pod-9a26ab19-f5d8-4ab9-a3bf-15937a580bdc: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Started: Started container write-pod
Oct  2 23:25:47.467: INFO: At 2021-10-02 23:25:33 +0000 UTC - event for pod-9a26ab19-f5d8-4ab9-a3bf-15937a580bdc: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Oct  2 23:25:47.467: INFO: At 2021-10-02 23:25:40 +0000 UTC - event for pod-66e424eb-f9cc-4ef0-93a7-68b3dba23e50: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Killing: Stopping container write-pod
Oct  2 23:25:47.467: INFO: At 2021-10-02 23:25:40 +0000 UTC - event for pod-9a26ab19-f5d8-4ab9-a3bf-15937a580bdc: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} Killing: Stopping container write-pod
Oct  2 23:25:47.467: INFO: At 2021-10-02 23:25:41 +0000 UTC - event for pod-66e424eb-f9cc-4ef0-93a7-68b3dba23e50: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "2bc9c119-51ff-49c8-954c-90be12337cfb" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to remove network namespace for sandbox \"c27acb6f8a3116e4af3082468e53a4e128fb303b790e9847ab1a9e82b8b58143\": failed to remove netns: unlinkat /run/netns/cni-cd281a90-70c0-057d-cccd-231b9e0f3193: device or resource busy"
Oct  2 23:25:47.467: INFO: At 2021-10-02 23:25:41 +0000 UTC - event for pod-9a26ab19-f5d8-4ab9-a3bf-15937a580bdc: {kubelet ip-172-20-40-74.ap-south-1.compute.internal} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "30111e6e-8d16-4d7f-a893-2232c8818608" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to remove network namespace for sandbox \"985f0283ee9151c0118a58fabd1343a8b53a8225b3b3dce01a88e65789ab3c8c\": failed to remove netns: unlinkat /run/netns/cni-441c8c85-4736-e416-3c16-38343fceb575: device or resource busy"
Oct  2 23:25:47.702: INFO: POD                                                         NODE                                         PHASE    GRACE  CONDITIONS
Oct  2 23:25:47.702: INFO: hostexec-ip-172-20-40-74.ap-south-1.compute.internal-2m8mr  ip-172-20-40-74.ap-south-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:05 +0000 UTC  }]
Oct  2 23:25:47.702: INFO: pod-66e424eb-f9cc-4ef0-93a7-68b3dba23e50                    ip-172-20-40-74.ap-south-1.compute.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:20 +0000 UTC  }]
Oct  2 23:25:47.702: INFO: pod-9a26ab19-f5d8-4ab9-a3bf-15937a580bdc                    ip-172-20-40-74.ap-south-1.compute.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-02 23:25:32 +0000 UTC  }]
Oct  2 23:25:47.702: INFO: 
Oct  2 23:25:47.961: INFO: 
... skipping 225 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time [AfterEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249

      Oct  2 23:25:47.231: Unexpected error:
          <exec.CodeExitError>: {
              Err: {
                  s: "command terminated with exit code 1",
              },
              Code: 1,
          }
          command terminated with exit code 1
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:170
------------------------------
SS
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":137,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:25:55.986: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 64 lines ...
Oct  2 23:25:55.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  2 23:25:57.343: INFO: Waiting up to 5m0s for pod "security-context-9105075b-d4c8-4863-8031-e19f0f91b51a" in namespace "security-context-7754" to be "Succeeded or Failed"
Oct  2 23:25:57.588: INFO: Pod "security-context-9105075b-d4c8-4863-8031-e19f0f91b51a": Phase="Pending", Reason="", readiness=false. Elapsed: 244.928706ms
Oct  2 23:25:59.834: INFO: Pod "security-context-9105075b-d4c8-4863-8031-e19f0f91b51a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.490213876s
STEP: Saw pod success
Oct  2 23:25:59.834: INFO: Pod "security-context-9105075b-d4c8-4863-8031-e19f0f91b51a" satisfied condition "Succeeded or Failed"
Oct  2 23:26:00.078: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod security-context-9105075b-d4c8-4863-8031-e19f0f91b51a container test-container: <nil>
STEP: delete the pod
Oct  2 23:26:00.575: INFO: Waiting for pod security-context-9105075b-d4c8-4863-8031-e19f0f91b51a to disappear
Oct  2 23:26:00.820: INFO: Pod security-context-9105075b-d4c8-4863-8031-e19f0f91b51a no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.452 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":19,"skipped":114,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:8.357 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should create pods for an Indexed job with completion indexes and specified hostname
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:150
------------------------------
{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname","total":-1,"completed":17,"skipped":139,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
• [SLOW TEST:29.208 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete jobs and pods created by cronjob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1155
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":24,"skipped":181,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:05.286: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 33 lines ...
STEP: Destroying namespace "apply-9311" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":20,"skipped":115,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:05.538: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 82 lines ...
Oct  2 23:22:01.744: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4787
Oct  2 23:22:01.991: INFO: creating *v1.StatefulSet: csi-mock-volumes-4787-9313/csi-mockplugin-attacher
Oct  2 23:22:02.233: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4787"
Oct  2 23:22:02.473: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4787 to register on node ip-172-20-34-88.ap-south-1.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Oct  2 23:22:11.408: INFO: Error getting logs for pod inline-volume-28bxp: the server rejected our request for an unknown reason (get pods inline-volume-28bxp)
Oct  2 23:22:11.649: INFO: Deleting pod "inline-volume-28bxp" in namespace "csi-mock-volumes-4787"
Oct  2 23:22:11.891: INFO: Wait up to 5m0s for pod "inline-volume-28bxp" to be fully deleted
STEP: Deleting the previously created pod
Oct  2 23:25:28.373: INFO: Deleting pod "pvc-volume-tester-zq5dx" in namespace "csi-mock-volumes-4787"
Oct  2 23:25:28.622: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zq5dx" to be fully deleted
STEP: Checking CSI driver logs
Oct  2 23:25:49.348: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Oct  2 23:25:49.348: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Oct  2 23:25:49.348: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-zq5dx
Oct  2 23:25:49.348: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-4787
Oct  2 23:25:49.348: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 99a22211-d005-4eff-8641-610ee8825a44
Oct  2 23:25:49.348: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-1ba47bc1b30a250f842d486e0097bb6dbf98149afcb3e5bb691501abe65e11f1","target_path":"/var/lib/kubelet/pods/99a22211-d005-4eff-8641-610ee8825a44/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-zq5dx
Oct  2 23:25:49.348: INFO: Deleting pod "pvc-volume-tester-zq5dx" in namespace "csi-mock-volumes-4787"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-4787
STEP: Waiting for namespaces [csi-mock-volumes-4787] to vanish
STEP: uninstalling csi mock driver
... skipping 51 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
Oct  2 23:26:06.729: INFO: Waiting up to 5m0s for pod "metadata-volume-1cad64d6-8646-4a3e-9214-0bf2c6d8d83f" in namespace "downward-api-5422" to be "Succeeded or Failed"
Oct  2 23:26:06.965: INFO: Pod "metadata-volume-1cad64d6-8646-4a3e-9214-0bf2c6d8d83f": Phase="Pending", Reason="", readiness=false. Elapsed: 235.866737ms
Oct  2 23:26:09.202: INFO: Pod "metadata-volume-1cad64d6-8646-4a3e-9214-0bf2c6d8d83f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473106488s
Oct  2 23:26:11.440: INFO: Pod "metadata-volume-1cad64d6-8646-4a3e-9214-0bf2c6d8d83f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.710889598s
STEP: Saw pod success
Oct  2 23:26:11.440: INFO: Pod "metadata-volume-1cad64d6-8646-4a3e-9214-0bf2c6d8d83f" satisfied condition "Succeeded or Failed"
Oct  2 23:26:11.676: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod metadata-volume-1cad64d6-8646-4a3e-9214-0bf2c6d8d83f container client-container: <nil>
STEP: delete the pod
Oct  2 23:26:12.158: INFO: Waiting for pod metadata-volume-1cad64d6-8646-4a3e-9214-0bf2c6d8d83f to disappear
Oct  2 23:26:12.394: INFO: Pod metadata-volume-1cad64d6-8646-4a3e-9214-0bf2c6d8d83f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.563 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":25,"skipped":186,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:12.884: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 87 lines ...
Oct  2 23:25:56.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct  2 23:25:57.345: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  2 23:25:57.849: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6292" in namespace "provisioning-6292" to be "Succeeded or Failed"
Oct  2 23:25:58.099: INFO: Pod "hostpath-symlink-prep-provisioning-6292": Phase="Pending", Reason="", readiness=false. Elapsed: 250.080966ms
Oct  2 23:26:00.354: INFO: Pod "hostpath-symlink-prep-provisioning-6292": Phase="Pending", Reason="", readiness=false. Elapsed: 2.504441766s
Oct  2 23:26:02.607: INFO: Pod "hostpath-symlink-prep-provisioning-6292": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.757657796s
STEP: Saw pod success
Oct  2 23:26:02.607: INFO: Pod "hostpath-symlink-prep-provisioning-6292" satisfied condition "Succeeded or Failed"
Oct  2 23:26:02.607: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6292" in namespace "provisioning-6292"
Oct  2 23:26:02.863: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6292" to be fully deleted
Oct  2 23:26:03.114: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-lbdc
STEP: Creating a pod to test subpath
Oct  2 23:26:03.367: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-lbdc" in namespace "provisioning-6292" to be "Succeeded or Failed"
Oct  2 23:26:03.619: INFO: Pod "pod-subpath-test-inlinevolume-lbdc": Phase="Pending", Reason="", readiness=false. Elapsed: 251.181727ms
Oct  2 23:26:05.869: INFO: Pod "pod-subpath-test-inlinevolume-lbdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502046067s
Oct  2 23:26:08.121: INFO: Pod "pod-subpath-test-inlinevolume-lbdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.754155596s
STEP: Saw pod success
Oct  2 23:26:08.122: INFO: Pod "pod-subpath-test-inlinevolume-lbdc" satisfied condition "Succeeded or Failed"
Oct  2 23:26:08.372: INFO: Trying to get logs from node ip-172-20-54-138.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-lbdc container test-container-volume-inlinevolume-lbdc: <nil>
STEP: delete the pod
Oct  2 23:26:08.884: INFO: Waiting for pod pod-subpath-test-inlinevolume-lbdc to disappear
Oct  2 23:26:09.134: INFO: Pod pod-subpath-test-inlinevolume-lbdc no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-lbdc
Oct  2 23:26:09.134: INFO: Deleting pod "pod-subpath-test-inlinevolume-lbdc" in namespace "provisioning-6292"
STEP: Deleting pod
Oct  2 23:26:09.384: INFO: Deleting pod "pod-subpath-test-inlinevolume-lbdc" in namespace "provisioning-6292"
Oct  2 23:26:09.887: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6292" in namespace "provisioning-6292" to be "Succeeded or Failed"
Oct  2 23:26:10.137: INFO: Pod "hostpath-symlink-prep-provisioning-6292": Phase="Pending", Reason="", readiness=false. Elapsed: 250.168767ms
Oct  2 23:26:12.389: INFO: Pod "hostpath-symlink-prep-provisioning-6292": Phase="Pending", Reason="", readiness=false. Elapsed: 2.501895257s
Oct  2 23:26:14.640: INFO: Pod "hostpath-symlink-prep-provisioning-6292": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.753656326s
STEP: Saw pod success
Oct  2 23:26:14.641: INFO: Pod "hostpath-symlink-prep-provisioning-6292" satisfied condition "Succeeded or Failed"
Oct  2 23:26:14.641: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6292" in namespace "provisioning-6292"
Oct  2 23:26:14.895: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6292" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:26:15.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6292" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":38,"skipped":300,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:15.677: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 337 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":12,"skipped":76,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:26:10.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
Oct  2 23:26:11.790: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  2 23:26:12.273: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8104" in namespace "provisioning-8104" to be "Succeeded or Failed"
Oct  2 23:26:12.513: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Pending", Reason="", readiness=false. Elapsed: 239.903907ms
Oct  2 23:26:14.756: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.483132228s
STEP: Saw pod success
Oct  2 23:26:14.756: INFO: Pod "hostpath-symlink-prep-provisioning-8104" satisfied condition "Succeeded or Failed"
Oct  2 23:26:14.756: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8104" in namespace "provisioning-8104"
Oct  2 23:26:15.003: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8104" to be fully deleted
Oct  2 23:26:15.243: INFO: Creating resource for inline volume
Oct  2 23:26:15.243: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Oct  2 23:26:15.244: INFO: Deleting pod "pod-subpath-test-inlinevolume-jz6t" in namespace "provisioning-8104"
Oct  2 23:26:15.726: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8104" in namespace "provisioning-8104" to be "Succeeded or Failed"
Oct  2 23:26:15.966: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Pending", Reason="", readiness=false. Elapsed: 239.985747ms
Oct  2 23:26:18.208: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.481902178s
STEP: Saw pod success
Oct  2 23:26:18.208: INFO: Pod "hostpath-symlink-prep-provisioning-8104" satisfied condition "Succeeded or Failed"
Oct  2 23:26:18.208: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8104" in namespace "provisioning-8104"
Oct  2 23:26:18.454: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8104" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:26:18.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8104" for this suite.
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":18,"skipped":140,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:22.694: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":81,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]"]}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:24.663: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 162 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":23,"skipped":176,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:25.739: INFO: Only supported for providers [vsphere] (not aws)
... skipping 63 lines ...
• [SLOW TEST:8.438 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":211,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}
[BeforeEach] [sig-api-machinery] Server request timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:26:26.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename request-timeout
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:26:27.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-3968" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":29,"skipped":211,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:28.362: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 89 lines ...
STEP: Destroying namespace "services-1802" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":24,"skipped":202,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a successful command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:500
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command","total":-1,"completed":26,"skipped":201,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":39,"skipped":315,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:32.833: INFO: Only supported for providers [openstack] (not aws)
... skipping 169 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":34,"skipped":180,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:26:25.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
• [SLOW TEST:9.058 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":35,"skipped":180,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:12.570 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":19,"skipped":153,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:35.360: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 189 lines ...
• [SLOW TEST:6.536 seconds]
[sig-network] Netpol API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:49
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":40,"skipped":345,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:39.638: INFO: Only supported for providers [openstack] (not aws)
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":25,"skipped":210,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:40.993: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 222 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":28,"skipped":256,"failed":2,"failures":["[sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume"]}

S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":161,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]}
[BeforeEach] [sig-autoscaling] DNS horizontal autoscaling
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 23:26:46.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns-autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 108 lines ...
• [SLOW TEST:14.386 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":36,"skipped":183,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:48.595: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 213 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":14,"skipped":92,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:48.982: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:121
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.152 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:121
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]","total":-1,"completed":26,"skipped":235,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:50.338: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 47 lines ...
Oct  2 23:26:48.813: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct  2 23:26:48.813: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2683 describe pod agnhost-primary-rb2nm'
Oct  2 23:26:50.100: INFO: stderr: ""
Oct  2 23:26:50.100: INFO: stdout: "Name:         agnhost-primary-rb2nm\nNamespace:    kubectl-2683\nPriority:     0\nNode:         ip-172-20-54-138.ap-south-1.compute.internal/172.20.54.138\nStart Time:   Sat, 02 Oct 2021 23:26:42 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.1.83\nIPs:\n  IP:           100.96.1.83\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://ae81a0179d9fa731685eea1cf8f5898bfa7e4277f3320c46f7b907041c2208c8\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 02 Oct 2021 23:26:42 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7r25z (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-7r25z:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  8s    default-scheduler  Successfully assigned kubectl-2683/agnhost-primary-rb2nm to ip-172-20-54-138.ap-south-1.compute.internal\n  Normal  Pulled     8s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    8s    kubelet            Created container agnhost-primary\n  Normal  Started    8s    kubelet            Started container agnhost-primary\n"
Oct  2 23:26:50.100: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2683 describe rc agnhost-primary'
Oct  2 23:26:51.642: INFO: stderr: ""
Oct  2 23:26:51.642: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-2683\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: agnhost-primary-rb2nm\n"
Oct  2 23:26:51.643: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2683 describe service agnhost-primary'
Oct  2 23:26:53.137: INFO: stderr: ""
Oct  2 23:26:53.137: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-2683\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.67.69.3\nIPs:               100.67.69.3\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.1.83:6379\nSession Affinity:  None\nEvents:            <none>\n"
Oct  2 23:26:53.389: INFO: Running '/tmp/kubectl3530846325/kubectl --server=https://api.e2e-d1fd032b1d-00cc5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2683 describe node ip-172-20-33-208.ap-south-1.compute.internal'
Oct  2 23:26:55.998: INFO: stderr: ""
Oct  2 23:26:55.998: INFO: stdout: "Name:               ip-172-20-33-208.ap-south-1.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.medium\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=ap-south-1\n                    failure-domain.beta.kubernetes.io/zone=ap-south-1a\n                    kops.k8s.io/instancegroup=nodes-ap-south-1a\n                    kubelet_cleanup=true\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-33-208.ap-south-1.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=t3.medium\n                    topology.ebs.csi.aws.com/zone=ap-south-1a\n                    topology.hostpath.csi/node=ip-172-20-33-208.ap-south-1.compute.internal\n                    topology.kubernetes.io/region=ap-south-1\n                    topology.kubernetes.io/zone=ap-south-1a\nAnnotations:        csi.volume.kubernetes.io/nodeid:\n                      {\"csi-hostpath-ephemeral-401\":\"ip-172-20-33-208.ap-south-1.compute.internal\",\"ebs.csi.aws.com\":\"i-049e8578446ca957f\"}\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 02 Oct 2021 23:04:00 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-33-208.ap-south-1.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 02 Oct 2021 23:26:47 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 02 Oct 2021 23:04:04 +0000   Sat, 02 Oct 2021 23:04:04 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Sat, 02 Oct 2021 23:26:52 +0000   Sat, 02 Oct 2021 23:04:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 02 Oct 2021 23:26:52 +0000   Sat, 02 Oct 2021 23:04:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 02 Oct 2021 23:26:52 +0000   Sat, 02 Oct 2021 23:04:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 02 Oct 2021 23:26:52 +0000   Sat, 02 Oct 2021 23:04:10 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:   172.20.33.208\n  ExternalIP:   13.232.20.209\n  Hostname:     ip-172-20-33-208.ap-south-1.compute.internal\n  InternalDNS:  ip-172-20-33-208.ap-south-1.compute.internal\n  ExternalDNS:  ec2-13-232-20-209.ap-south-1.compute.amazonaws.com\nCapacity:\n  cpu:                2\n  ephemeral-storage:  50319340Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3818792Ki\n  pods:               110\nAllocatable:\n  cpu:                2\n  ephemeral-storage:  46374303668\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3716392Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 
ec27b3aec05d305a050a518e01e80b9a\n  System UUID:                EC27B3AE-C05D-305A-050A-518E01E80B9A\n  Boot ID:                    65cd5698-972e-4500-8870-7e5f6565be5d\n  Kernel Version:             3.10.0-1160.el7.x86_64\n  OS Image:                   Red Hat Enterprise Linux Server 7.9 (Maipo)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.10\n  Kubelet Version:            v1.22.2\n  Kube-Proxy Version:         v1.22.2\nPodCIDR:                      100.96.4.0/24\nPodCIDRs:                     100.96.4.0/24\nProviderID:                   aws:///ap-south-1a/i-049e8578446ca957f\nNon-terminated Pods:          (20 in total)\n  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---\n  deployment-1509             test-deployment-855f7994f9-z9xks                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s\n  deployment-878              test-rolling-update-with-lb-864fb64577-vjhsh               0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s\n  ephemeral-401-1770          csi-hostpathplugin-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s\n  ephemeral-401               inline-volume-tester-2hnp7                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s\n  kube-system                 ebs-csi-node-6227k                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m\n  kube-system                 kube-proxy-ip-172-20-33-208.ap-south-1.compute.internal    100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m\n  kubectl-6637                httpd-deployment-948b4c64c-lnztt                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s\n  kubelet-6538                cleanup40-5eae18b6-2539-487d-bdce-e205c7fabe5d-4pcft       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  kubelet-6538                cleanup40-5eae18b6-2539-487d-bdce-e205c7fabe5d-gq9vm       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s\n  kubelet-6538                cleanup40-5eae18b6-2539-487d-bdce-e205c7fabe5d-hgzsv       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  kubelet-6538                cleanup40-5eae18b6-2539-487d-bdce-e205c7fabe5d-jdfhf       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s\n  kubelet-6538                cleanup40-5eae18b6-2539-487d-bdce-e205c7fabe5d-lh5gk       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  kubelet-6538                cleanup40-5eae18b6-2539-487d-bdce-e205c7fabe5d-mzv5l       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  kubelet-6538                cleanup40-5eae18b6-2539-487d-bdce-e205c7fabe5d-qwvrl       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  kubelet-6538                cleanup40-5eae18b6-2539-487d-bdce-e205c7fabe5d-sdkm5       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  kubelet-6538                cleanup40-5eae18b6-2539-487d-bdce-e205c7fabe5d-sdzgt       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  kubelet-6538                cleanup40-5eae18b6-2539-487d-bdce-e205c7fabe5d-v79k8       0 (0%)        0 (0%)  
    0 (0%)           0 (0%)         24s\n  nettest-4708                netserver-0                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s\n  statefulset-3106            ss-1                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m54s\n  statefulset-7688            ss-0                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                100m (5%)  0 (0%)\n  memory             0 (0%)     0 (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\n  hugepages-1Gi      0 (0%)     0 (0%)\n  hugepages-2Mi      0 (0%)     0 (0%)\nEvents:\n  Type     Reason                   Age                From     Message\n  ----     ------                   ----               ----     -------\n  Normal   Starting                 23m                kubelet  Starting kubelet.\n  Warning  InvalidDiskCapacity      23m                kubelet  invalid capacity 0 on image filesystem\n  Normal   NodeAllocatableEnforced  23m                kubelet  Updated Node Allocatable limit across pods\n  Normal   NodeHasSufficientMemory  22m (x7 over 23m)  kubelet  Node ip-172-20-33-208.ap-south-1.compute.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    22m (x7 over 23m)  kubelet  Node ip-172-20-33-208.ap-south-1.compute.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     22m (x7 over 23m)  kubelet  Node ip-172-20-33-208.ap-south-1.compute.internal status is now: NodeHasSufficientPID\n  Normal   NodeReady                22m                kubelet  Node ip-172-20-33-208.ap-south-1.compute.internal status is now: NodeReady\n"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1094
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":41,"skipped":363,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:10.007 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":21,"skipped":176,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:803
    apply set/view last-applied
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:838
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":27,"skipped":237,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 23:26:59.578: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 23:27:00.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1600" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":42,"skipped":364,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:14.403 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conf